00:00:00.001 Started by upstream project "autotest-per-patch" build number 127093 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.075 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.114 Fetching changes from the remote Git repository 00:00:00.115 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.155 Using shallow fetch with depth 1 00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.155 > git --version # timeout=10 00:00:00.184 > git --version # 'git version 2.39.2' 00:00:00.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.210 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.210 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.214 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.225 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.236 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.236 > git config core.sparsecheckout # timeout=10 00:00:05.245 > git read-tree -mu HEAD # timeout=10 00:00:05.262 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:05.293 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:05.293 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:05.378 [Pipeline] Start of Pipeline 00:00:05.393 [Pipeline] library 00:00:05.394 Loading library shm_lib@master 00:00:05.394 Library shm_lib@master is cached. Copying from home. 00:00:05.409 [Pipeline] node 00:00:05.417 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.419 [Pipeline] { 00:00:05.426 [Pipeline] catchError 00:00:05.427 [Pipeline] { 00:00:05.438 [Pipeline] wrap 00:00:05.446 [Pipeline] { 00:00:05.452 [Pipeline] stage 00:00:05.453 [Pipeline] { (Prologue) 00:00:05.645 [Pipeline] sh 00:00:05.927 + logger -p user.info -t JENKINS-CI 00:00:05.946 [Pipeline] echo 00:00:05.947 Node: GP8 00:00:05.956 [Pipeline] sh 00:00:06.254 [Pipeline] setCustomBuildProperty 00:00:06.266 [Pipeline] echo 00:00:06.268 Cleanup processes 00:00:06.274 [Pipeline] sh 00:00:06.557 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.557 1824652 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.567 [Pipeline] sh 00:00:06.844 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.844 ++ grep -v 'sudo pgrep' 00:00:06.844 ++ awk '{print $1}' 00:00:06.844 + sudo kill -9 00:00:06.844 + true 00:00:06.858 [Pipeline] cleanWs 00:00:06.868 [WS-CLEANUP] Deleting project workspace... 00:00:06.868 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.874 [WS-CLEANUP] done 00:00:06.878 [Pipeline] setCustomBuildProperty 00:00:06.892 [Pipeline] sh 00:00:07.169 + sudo git config --global --replace-all safe.directory '*' 00:00:07.234 [Pipeline] httpRequest 00:00:07.259 [Pipeline] echo 00:00:07.261 Sorcerer 10.211.164.101 is alive 00:00:07.269 [Pipeline] httpRequest 00:00:07.273 HttpMethod: GET 00:00:07.273 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.274 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.292 Response Code: HTTP/1.1 200 OK 00:00:07.293 Success: Status code 200 is in the accepted range: 200,404 00:00:07.293 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:13.051 [Pipeline] sh 00:00:13.333 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:13.608 [Pipeline] httpRequest 00:00:13.627 [Pipeline] echo 00:00:13.629 Sorcerer 10.211.164.101 is alive 00:00:13.638 [Pipeline] httpRequest 00:00:13.642 HttpMethod: GET 00:00:13.642 URL: http://10.211.164.101/packages/spdk_da8d49b2f0fe0c69b320f8d931ff57e6e6df1c0f.tar.gz 00:00:13.643 Sending request to url: http://10.211.164.101/packages/spdk_da8d49b2f0fe0c69b320f8d931ff57e6e6df1c0f.tar.gz 00:00:13.667 Response Code: HTTP/1.1 200 OK 00:00:13.668 Success: Status code 200 is in the accepted range: 200,404 00:00:13.668 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_da8d49b2f0fe0c69b320f8d931ff57e6e6df1c0f.tar.gz 00:00:56.323 [Pipeline] sh 00:00:56.605 + tar --no-same-owner -xf spdk_da8d49b2f0fe0c69b320f8d931ff57e6e6df1c0f.tar.gz 00:01:03.193 [Pipeline] sh 00:01:03.487 + git -C spdk log --oneline -n5 00:01:03.487 da8d49b2f python/rpc: Replace bdev.py with generated rpc's 00:01:03.487 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:01:03.487 50222f810 configure: don't exit on non Intel platforms 00:01:03.488 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:03.488 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:01:03.534 [Pipeline] } 00:01:03.553 [Pipeline] // stage 00:01:03.564 [Pipeline] stage 00:01:03.566 [Pipeline] { (Prepare) 00:01:03.586 [Pipeline] writeFile 00:01:03.606 [Pipeline] sh 00:01:03.889 + logger -p user.info -t JENKINS-CI 00:01:03.904 [Pipeline] sh 00:01:04.220 + logger -p user.info -t JENKINS-CI 00:01:04.233 [Pipeline] sh 00:01:04.516 + cat autorun-spdk.conf 00:01:04.516 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.516 SPDK_TEST_NVMF=1 00:01:04.516 SPDK_TEST_NVME_CLI=1 00:01:04.516 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.516 SPDK_TEST_NVMF_NICS=e810 00:01:04.516 SPDK_TEST_VFIOUSER=1 00:01:04.516 SPDK_RUN_UBSAN=1 00:01:04.516 NET_TYPE=phy 00:01:04.523 RUN_NIGHTLY=0 00:01:04.528 [Pipeline] readFile 00:01:04.555 [Pipeline] withEnv 00:01:04.556 [Pipeline] { 00:01:04.570 [Pipeline] sh 00:01:04.852 + set -ex 00:01:04.852 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:04.852 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.852 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.852 ++ SPDK_TEST_NVMF=1 00:01:04.852 ++ SPDK_TEST_NVME_CLI=1 00:01:04.852 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.852 ++ SPDK_TEST_NVMF_NICS=e810 00:01:04.852 ++ SPDK_TEST_VFIOUSER=1 00:01:04.852 ++ SPDK_RUN_UBSAN=1 00:01:04.852 ++ NET_TYPE=phy 00:01:04.852 ++ RUN_NIGHTLY=0 00:01:04.852 + case $SPDK_TEST_NVMF_NICS in 00:01:04.852 + DRIVERS=ice 
00:01:04.852 + [[ tcp == \r\d\m\a ]] 00:01:04.852 + [[ -n ice ]] 00:01:04.852 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:04.852 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:04.852 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:04.852 rmmod: ERROR: Module irdma is not currently loaded 00:01:04.852 rmmod: ERROR: Module i40iw is not currently loaded 00:01:04.852 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:04.852 + true 00:01:04.852 + for D in $DRIVERS 00:01:04.852 + sudo modprobe ice 00:01:04.852 + exit 0 00:01:04.862 [Pipeline] } 00:01:04.879 [Pipeline] // withEnv 00:01:04.883 [Pipeline] } 00:01:04.899 [Pipeline] // stage 00:01:04.908 [Pipeline] catchError 00:01:04.910 [Pipeline] { 00:01:04.925 [Pipeline] timeout 00:01:04.925 Timeout set to expire in 50 min 00:01:04.927 [Pipeline] { 00:01:04.939 [Pipeline] stage 00:01:04.940 [Pipeline] { (Tests) 00:01:04.950 [Pipeline] sh 00:01:05.229 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.229 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.229 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.229 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:05.229 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.229 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.229 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:05.229 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.229 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.229 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.229 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:05.229 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.229 + source /etc/os-release 00:01:05.229 ++ NAME='Fedora Linux' 00:01:05.229 ++ VERSION='38 (Cloud Edition)' 00:01:05.229 ++ ID=fedora 00:01:05.229 ++ VERSION_ID=38 00:01:05.229 ++ VERSION_CODENAME= 00:01:05.229 ++ PLATFORM_ID=platform:f38 00:01:05.229 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:05.229 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:05.229 ++ LOGO=fedora-logo-icon 00:01:05.229 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:05.229 ++ HOME_URL=https://fedoraproject.org/ 00:01:05.229 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:05.229 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:05.229 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:05.229 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:05.229 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:05.229 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:05.229 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:05.229 ++ SUPPORT_END=2024-05-14 00:01:05.229 ++ VARIANT='Cloud Edition' 00:01:05.229 ++ VARIANT_ID=cloud 00:01:05.229 + uname -a 00:01:05.229 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:05.229 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:06.607 Hugepages 00:01:06.607 node hugesize free / total 00:01:06.607 node0 1048576kB 0 / 0 00:01:06.607 node0 2048kB 0 / 0 00:01:06.607 node1 1048576kB 0 / 0 00:01:06.607 node1 2048kB 0 / 0 00:01:06.607 00:01:06.607 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:06.607 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:06.607 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:06.607 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma 
- - 00:01:06.607 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:06.607 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:06.607 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:06.607 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:06.607 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:06.607 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:06.607 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:06.607 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:06.607 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:06.607 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:06.607 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:06.607 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:06.607 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:06.607 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:06.607 + rm -f /tmp/spdk-ld-path 00:01:06.607 + source autorun-spdk.conf 00:01:06.607 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.607 ++ SPDK_TEST_NVMF=1 00:01:06.607 ++ SPDK_TEST_NVME_CLI=1 00:01:06.607 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.607 ++ SPDK_TEST_NVMF_NICS=e810 00:01:06.607 ++ SPDK_TEST_VFIOUSER=1 00:01:06.607 ++ SPDK_RUN_UBSAN=1 00:01:06.607 ++ NET_TYPE=phy 00:01:06.607 ++ RUN_NIGHTLY=0 00:01:06.607 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:06.607 + [[ -n '' ]] 00:01:06.607 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:06.867 + for M in /var/spdk/build-*-manifest.txt 00:01:06.867 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:06.867 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.867 + for M in /var/spdk/build-*-manifest.txt 00:01:06.867 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:06.867 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.867 ++ uname 00:01:06.867 + [[ Linux == \L\i\n\u\x ]] 00:01:06.867 + sudo dmesg -T 00:01:06.867 + sudo dmesg --clear 00:01:06.867 + dmesg_pid=1825424 00:01:06.867 + [[ Fedora Linux == FreeBSD ]] 00:01:06.867 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.867 + sudo dmesg -Tw 00:01:06.867 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.867 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:06.867 + [[ -x /usr/src/fio-static/fio ]] 00:01:06.867 + export FIO_BIN=/usr/src/fio-static/fio 00:01:06.867 + FIO_BIN=/usr/src/fio-static/fio 00:01:06.867 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:06.867 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:06.867 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:06.867 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.867 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.867 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:06.867 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.867 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.867 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:06.867 Test configuration: 00:01:06.867 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.867 SPDK_TEST_NVMF=1 00:01:06.867 SPDK_TEST_NVME_CLI=1 00:01:06.867 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.867 SPDK_TEST_NVMF_NICS=e810 00:01:06.867 SPDK_TEST_VFIOUSER=1 00:01:06.867 SPDK_RUN_UBSAN=1 00:01:06.867 NET_TYPE=phy 00:01:06.867 RUN_NIGHTLY=0 19:55:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:06.867 19:55:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:06.867 19:55:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:06.867 19:55:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:06.867 19:55:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.867 19:55:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.867 19:55:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.867 19:55:10 -- paths/export.sh@5 -- $ export PATH 00:01:06.867 19:55:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.867 19:55:10 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:06.867 19:55:10 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:06.867 19:55:10 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721843710.XXXXXX 00:01:06.867 19:55:10 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721843710.kMSeS0 00:01:06.867 19:55:10 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:06.867 19:55:10 -- 
common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:06.867 19:55:10 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:06.867 19:55:10 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:06.867 19:55:10 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:06.867 19:55:10 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:06.868 19:55:10 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:06.868 19:55:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.868 19:55:10 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:06.868 19:55:10 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:06.868 19:55:10 -- pm/common@17 -- $ local monitor 00:01:06.868 19:55:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.868 19:55:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.868 19:55:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.868 19:55:10 -- pm/common@21 -- $ date +%s 00:01:06.868 19:55:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.868 19:55:10 -- pm/common@21 -- $ date +%s 00:01:06.868 19:55:10 -- pm/common@21 -- $ date +%s 00:01:06.868 19:55:10 -- pm/common@25 -- $ sleep 1 00:01:06.868 19:55:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721843710 00:01:06.868 19:55:10 -- pm/common@21 -- $ date +%s 00:01:06.868 19:55:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721843710 00:01:06.868 19:55:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721843710 00:01:06.868 19:55:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721843710 00:01:07.128 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721843710_collect-vmstat.pm.log 00:01:07.128 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721843710_collect-cpu-load.pm.log 00:01:07.128 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721843710_collect-cpu-temp.pm.log 00:01:07.128 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721843710_collect-bmc-pm.bmc.pm.log 00:01:08.067 19:55:11 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:08.067 19:55:11 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:08.067 19:55:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:08.067 19:55:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:08.067 19:55:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:08.067 Wed Jul 24 05:55:11 PM UTC 2024 00:01:08.067 19:55:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:08.067 v24.09-pre-312-gda8d49b2f 00:01:08.067 19:55:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:08.067 19:55:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:08.067 19:55:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:08.067 19:55:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:08.068 19:55:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:08.068 19:55:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.068 ************************************ 00:01:08.068 START TEST ubsan 00:01:08.068 ************************************ 00:01:08.068 19:55:11 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:08.068 using ubsan 00:01:08.068 00:01:08.068 real 0m0.000s 00:01:08.068 user 0m0.000s 00:01:08.068 sys 0m0.000s 00:01:08.068 19:55:11 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:08.068 19:55:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:08.068 ************************************ 00:01:08.068 END TEST ubsan 00:01:08.068 ************************************ 00:01:08.068 19:55:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:08.068 19:55:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:08.068 19:55:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:08.068 19:55:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:08.068 19:55:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:08.068 19:55:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:08.068 19:55:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:08.068 19:55:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:08.068 19:55:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:08.068 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:08.068 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:08.636 Using 'verbs' RDMA provider 00:01:24.899 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:39.791 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:39.791 Creating mk/config.mk...done. 00:01:39.791 Creating mk/cc.flags.mk...done. 00:01:39.791 Type 'make' to build. 00:01:39.791 19:55:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:39.791 19:55:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:39.791 19:55:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:39.791 19:55:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.791 ************************************ 00:01:39.791 START TEST make 00:01:39.791 ************************************ 00:01:39.791 19:55:42 make -- common/autotest_common.sh@1125 -- $ make -j48 00:01:39.791 make[1]: Nothing to be done for 'all'. 
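For reference, the configure-and-build phase captured above can be rerun outside Jenkins. A minimal sketch, assuming a local SPDK checkout (the ./spdk path is illustrative; the flags are copied from this log):

    # Sketch: replay the configure + build step recorded above.
    # ./spdk is a hypothetical local checkout; the flags are taken from the log.
    cd ./spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48    # this run uses 48 jobs

Here --enable-ubsan corresponds to SPDK_RUN_UBSAN=1 in the autorun-spdk.conf shown above, and --disable-unit-tests keeps this functional-test build from compiling the unit-test binaries.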
00:01:41.178 The Meson build system 00:01:41.178 Version: 1.3.1 00:01:41.178 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:41.178 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.178 Build type: native build 00:01:41.178 Project name: libvfio-user 00:01:41.178 Project version: 0.0.1 00:01:41.178 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:41.178 C linker for the host machine: cc ld.bfd 2.39-16 00:01:41.178 Host machine cpu family: x86_64 00:01:41.178 Host machine cpu: x86_64 00:01:41.178 Run-time dependency threads found: YES 00:01:41.178 Library dl found: YES 00:01:41.178 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:41.178 Run-time dependency json-c found: YES 0.17 00:01:41.178 Run-time dependency cmocka found: YES 1.1.7 00:01:41.178 Program pytest-3 found: NO 00:01:41.178 Program flake8 found: NO 00:01:41.178 Program misspell-fixer found: NO 00:01:41.178 Program restructuredtext-lint found: NO 00:01:41.178 Program valgrind found: YES (/usr/bin/valgrind) 00:01:41.178 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:41.178 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:41.178 Compiler for C supports arguments -Wwrite-strings: YES 00:01:41.178 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:41.178 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:41.178 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:41.178 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:41.178 Build targets in project: 8 00:01:41.178 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:41.178 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:41.178 00:01:41.178 libvfio-user 0.0.1 00:01:41.178 00:01:41.178 User defined options 00:01:41.178 buildtype : debug 00:01:41.178 default_library: shared 00:01:41.178 libdir : /usr/local/lib 00:01:41.178 00:01:41.178 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:41.755 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:41.755 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:42.023 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:42.023 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:42.023 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:42.023 [5/37] Compiling C object samples/null.p/null.c.o 00:01:42.023 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:42.023 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:42.023 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:42.023 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:42.023 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:42.023 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:42.023 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:42.023 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:42.023 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:42.023 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:42.023 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:42.023 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:42.023 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:42.023 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:42.023 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:42.023 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:42.023 [22/37] Compiling C object samples/server.p/server.c.o 00:01:42.023 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:42.023 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:42.023 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:42.283 [26/37] Compiling C object samples/client.p/client.c.o 00:01:42.283 [27/37] Linking target samples/client 00:01:42.283 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:42.283 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:42.284 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:42.284 [31/37] Linking target test/unit_tests 00:01:42.545 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:42.809 [33/37] Linking target samples/null 00:01:42.809 [34/37] Linking target samples/server 00:01:42.809 [35/37] Linking target samples/gpio-pci-idio-16 00:01:42.809 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:42.809 [37/37] Linking target samples/lspci 00:01:42.809 INFO: autodetecting backend as ninja 00:01:42.809 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
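The libvfio-user build above, and the install command that follows it, use the standard Meson out-of-tree pattern: configure into a dedicated build directory, compile with ninja, then stage the artifacts with a DESTDIR install instead of writing into the live system. A minimal sketch of the same sequence, with illustrative directory names (the option values mirror the "User defined options" summary above):

    # Sketch: Meson out-of-tree build + staged install, assuming meson/ninja on PATH.
    meson setup build-debug libvfio-user --buildtype=debug \
        -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C build-debug
    # Stage into a throwaway root (path illustrative) rather than installing system-wide:
    DESTDIR=/tmp/libvfio-user-root meson install --quiet -C build-debug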
00:01:42.809 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.785 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.785 ninja: no work to do. 00:01:50.358 The Meson build system 00:01:50.358 Version: 1.3.1 00:01:50.358 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:50.358 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:50.358 Build type: native build 00:01:50.358 Program cat found: YES (/usr/bin/cat) 00:01:50.358 Project name: DPDK 00:01:50.358 Project version: 24.03.0 00:01:50.358 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:50.358 C linker for the host machine: cc ld.bfd 2.39-16 00:01:50.358 Host machine cpu family: x86_64 00:01:50.358 Host machine cpu: x86_64 00:01:50.358 Message: ## Building in Developer Mode ## 00:01:50.358 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.358 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:50.358 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.358 Program python3 found: YES (/usr/bin/python3) 00:01:50.358 Program cat found: YES (/usr/bin/cat) 00:01:50.358 Compiler for C supports arguments -march=native: YES 00:01:50.358 Checking for size of "void *" : 8 00:01:50.358 Checking for size of "void *" : 8 (cached) 00:01:50.358 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:50.358 Library m found: YES 00:01:50.358 Library numa found: YES 00:01:50.358 Has header "numaif.h" : YES 00:01:50.358 Library fdt found: NO 00:01:50.358 Library execinfo found: NO 00:01:50.358 Has header "execinfo.h" : YES 00:01:50.358 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:50.358 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.358 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.358 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.358 Run-time dependency openssl found: YES 3.0.9 00:01:50.358 Run-time dependency libpcap found: YES 1.10.4 00:01:50.358 Has header "pcap.h" with dependency libpcap: YES 00:01:50.358 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.358 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.358 Compiler for C supports arguments -Wformat: YES 00:01:50.358 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.358 Compiler for C supports arguments -Wformat-security: NO 00:01:50.358 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.358 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.358 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.358 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.358 Compiler for C supports arguments -Wpointer-arith: YES 00:01:50.358 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.358 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.358 Compiler for C supports arguments -Wundef: YES 00:01:50.358 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.358 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.358 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:50.358 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.358 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.358 Program objdump found: YES (/usr/bin/objdump) 00:01:50.358 Compiler for C supports arguments -mavx512f: YES 00:01:50.358 Checking if "AVX512 checking" compiles: YES 00:01:50.358 Fetching value of define "__SSE4_2__" : 1 00:01:50.358 Fetching value of define "__AES__" : 1 00:01:50.358 Fetching value of define "__AVX__" : 1 00:01:50.358 Fetching value of define "__AVX2__" : (undefined) 00:01:50.358 Fetching value of define "__AVX512BW__" : (undefined) 00:01:50.358 Fetching value of define "__AVX512CD__" : (undefined) 00:01:50.358 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:50.358 Fetching value of define "__AVX512F__" : (undefined) 00:01:50.358 Fetching value of define "__AVX512VL__" : (undefined) 00:01:50.358 Fetching value of define "__PCLMUL__" : 1 00:01:50.358 Fetching value of define "__RDRND__" : 1 00:01:50.358 Fetching value of define "__RDSEED__" : (undefined) 00:01:50.358 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:50.358 Fetching value of define "__znver1__" : (undefined) 00:01:50.358 Fetching value of define "__znver2__" : (undefined) 00:01:50.358 Fetching value of define "__znver3__" : (undefined) 00:01:50.358 Fetching value of define "__znver4__" : (undefined) 00:01:50.358 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.358 Message: lib/log: Defining dependency "log" 00:01:50.358 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.358 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.358 Checking for function "getentropy" : NO 00:01:50.358 Message: lib/eal: Defining dependency "eal" 00:01:50.358 Message: lib/ring: Defining dependency "ring" 00:01:50.358 Message: lib/rcu: Defining dependency "rcu" 00:01:50.358 Message: lib/mempool: Defining dependency "mempool" 00:01:50.358 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.358 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.358 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.358 Compiler for C supports arguments -mpclmul: YES 00:01:50.358 Compiler for C supports arguments -maes: YES 00:01:50.358 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.358 Compiler for C supports arguments -mavx512bw: YES 00:01:50.358 Compiler for C supports arguments -mavx512dq: YES 00:01:50.358 Compiler for C supports arguments -mavx512vl: YES 00:01:50.358 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.358 Compiler for C supports arguments -mavx2: YES 00:01:50.358 Compiler for C supports arguments -mavx: YES 00:01:50.358 Message: lib/net: Defining dependency "net" 00:01:50.358 Message: lib/meter: Defining dependency "meter" 00:01:50.358 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.358 Message: lib/pci: Defining dependency "pci" 00:01:50.358 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.358 Message: lib/hash: Defining dependency "hash" 00:01:50.358 Message: lib/timer: Defining dependency "timer" 00:01:50.358 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.358 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.358 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.358 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.358 Message: lib/power: Defining dependency "power" 00:01:50.358 Message: lib/reorder: Defining dependency "reorder" 00:01:50.358 
Message: lib/security: Defining dependency "security" 00:01:50.358 Has header "linux/userfaultfd.h" : YES 00:01:50.358 Has header "linux/vduse.h" : YES 00:01:50.358 Message: lib/vhost: Defining dependency "vhost" 00:01:50.358 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.358 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.358 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.358 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.358 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.358 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.359 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.359 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.359 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.359 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:50.359 Program doxygen found: YES (/usr/bin/doxygen) 00:01:50.359 Configuring doxy-api-html.conf using configuration 00:01:50.359 Configuring doxy-api-man.conf using configuration 00:01:50.359 Program mandb found: YES (/usr/bin/mandb) 00:01:50.359 Program sphinx-build found: NO 00:01:50.359 Configuring rte_build_config.h using configuration 00:01:50.359 Message: 00:01:50.359 ================= 00:01:50.359 Applications Enabled 00:01:50.359 ================= 00:01:50.359 00:01:50.359 apps: 00:01:50.359 00:01:50.359 00:01:50.359 Message: 00:01:50.359 ================= 00:01:50.359 Libraries Enabled 00:01:50.359 ================= 00:01:50.359 00:01:50.359 libs: 00:01:50.359 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.359 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.359 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.359 00:01:50.359 Message: 00:01:50.359 =============== 00:01:50.359 Drivers Enabled 00:01:50.359 =============== 00:01:50.359 00:01:50.359 common: 00:01:50.359 00:01:50.359 bus: 00:01:50.359 pci, vdev, 00:01:50.359 mempool: 00:01:50.359 ring, 00:01:50.359 dma: 00:01:50.359 00:01:50.359 net: 00:01:50.359 00:01:50.359 crypto: 00:01:50.359 00:01:50.359 compress: 00:01:50.359 00:01:50.359 vdpa: 00:01:50.359 00:01:50.359 00:01:50.359 Message: 00:01:50.359 ================= 00:01:50.359 Content Skipped 00:01:50.359 ================= 00:01:50.359 00:01:50.359 apps: 00:01:50.359 dumpcap: explicitly disabled via build config 00:01:50.359 graph: explicitly disabled via build config 00:01:50.359 pdump: explicitly disabled via build config 00:01:50.359 proc-info: explicitly disabled via build config 00:01:50.359 test-acl: explicitly disabled via build config 00:01:50.359 test-bbdev: explicitly disabled via build config 00:01:50.359 test-cmdline: explicitly disabled via build config 00:01:50.359 test-compress-perf: explicitly disabled via build config 00:01:50.359 test-crypto-perf: explicitly disabled via build config 00:01:50.359 test-dma-perf: explicitly disabled via build config 00:01:50.359 test-eventdev: explicitly disabled via build config 00:01:50.359 test-fib: explicitly disabled via build config 00:01:50.359 test-flow-perf: explicitly disabled via build config 00:01:50.359 test-gpudev: explicitly disabled via build config 00:01:50.359 test-mldev: explicitly disabled via build config 00:01:50.359 test-pipeline: explicitly disabled via build config 00:01:50.359 test-pmd: explicitly disabled via build config 
00:01:50.359 test-regex: explicitly disabled via build config 00:01:50.359 test-sad: explicitly disabled via build config 00:01:50.359 test-security-perf: explicitly disabled via build config 00:01:50.359 00:01:50.359 libs: 00:01:50.359 argparse: explicitly disabled via build config 00:01:50.359 metrics: explicitly disabled via build config 00:01:50.359 acl: explicitly disabled via build config 00:01:50.359 bbdev: explicitly disabled via build config 00:01:50.359 bitratestats: explicitly disabled via build config 00:01:50.359 bpf: explicitly disabled via build config 00:01:50.359 cfgfile: explicitly disabled via build config 00:01:50.359 distributor: explicitly disabled via build config 00:01:50.359 efd: explicitly disabled via build config 00:01:50.359 eventdev: explicitly disabled via build config 00:01:50.359 dispatcher: explicitly disabled via build config 00:01:50.359 gpudev: explicitly disabled via build config 00:01:50.359 gro: explicitly disabled via build config 00:01:50.359 gso: explicitly disabled via build config 00:01:50.359 ip_frag: explicitly disabled via build config 00:01:50.359 jobstats: explicitly disabled via build config 00:01:50.359 latencystats: explicitly disabled via build config 00:01:50.359 lpm: explicitly disabled via build config 00:01:50.359 member: explicitly disabled via build config 00:01:50.359 pcapng: explicitly disabled via build config 00:01:50.359 rawdev: explicitly disabled via build config 00:01:50.359 regexdev: explicitly disabled via build config 00:01:50.359 mldev: explicitly disabled via build config 00:01:50.359 rib: explicitly disabled via build config 00:01:50.359 sched: explicitly disabled via build config 00:01:50.359 stack: explicitly disabled via build config 00:01:50.359 ipsec: explicitly disabled via build config 00:01:50.359 pdcp: explicitly disabled via build config 00:01:50.359 fib: explicitly disabled via build config 00:01:50.359 port: explicitly disabled via build config 00:01:50.359 pdump: explicitly disabled via build config 00:01:50.359 table: explicitly disabled via build config 00:01:50.359 pipeline: explicitly disabled via build config 00:01:50.359 graph: explicitly disabled via build config 00:01:50.359 node: explicitly disabled via build config 00:01:50.359 00:01:50.359 drivers: 00:01:50.359 common/cpt: not in enabled drivers build config 00:01:50.359 common/dpaax: not in enabled drivers build config 00:01:50.359 common/iavf: not in enabled drivers build config 00:01:50.359 common/idpf: not in enabled drivers build config 00:01:50.359 common/ionic: not in enabled drivers build config 00:01:50.359 common/mvep: not in enabled drivers build config 00:01:50.359 common/octeontx: not in enabled drivers build config 00:01:50.359 bus/auxiliary: not in enabled drivers build config 00:01:50.359 bus/cdx: not in enabled drivers build config 00:01:50.359 bus/dpaa: not in enabled drivers build config 00:01:50.359 bus/fslmc: not in enabled drivers build config 00:01:50.359 bus/ifpga: not in enabled drivers build config 00:01:50.359 bus/platform: not in enabled drivers build config 00:01:50.359 bus/uacce: not in enabled drivers build config 00:01:50.359 bus/vmbus: not in enabled drivers build config 00:01:50.359 common/cnxk: not in enabled drivers build config 00:01:50.359 common/mlx5: not in enabled drivers build config 00:01:50.359 common/nfp: not in enabled drivers build config 00:01:50.359 common/nitrox: not in enabled drivers build config 00:01:50.359 common/qat: not in enabled drivers build config 00:01:50.359 common/sfc_efx: not in 
enabled drivers build config 00:01:50.359 mempool/bucket: not in enabled drivers build config 00:01:50.359 mempool/cnxk: not in enabled drivers build config 00:01:50.359 mempool/dpaa: not in enabled drivers build config 00:01:50.359 mempool/dpaa2: not in enabled drivers build config 00:01:50.359 mempool/octeontx: not in enabled drivers build config 00:01:50.359 mempool/stack: not in enabled drivers build config 00:01:50.359 dma/cnxk: not in enabled drivers build config 00:01:50.359 dma/dpaa: not in enabled drivers build config 00:01:50.359 dma/dpaa2: not in enabled drivers build config 00:01:50.359 dma/hisilicon: not in enabled drivers build config 00:01:50.359 dma/idxd: not in enabled drivers build config 00:01:50.359 dma/ioat: not in enabled drivers build config 00:01:50.359 dma/skeleton: not in enabled drivers build config 00:01:50.359 net/af_packet: not in enabled drivers build config 00:01:50.359 net/af_xdp: not in enabled drivers build config 00:01:50.359 net/ark: not in enabled drivers build config 00:01:50.359 net/atlantic: not in enabled drivers build config 00:01:50.359 net/avp: not in enabled drivers build config 00:01:50.359 net/axgbe: not in enabled drivers build config 00:01:50.359 net/bnx2x: not in enabled drivers build config 00:01:50.359 net/bnxt: not in enabled drivers build config 00:01:50.359 net/bonding: not in enabled drivers build config 00:01:50.359 net/cnxk: not in enabled drivers build config 00:01:50.359 net/cpfl: not in enabled drivers build config 00:01:50.359 net/cxgbe: not in enabled drivers build config 00:01:50.359 net/dpaa: not in enabled drivers build config 00:01:50.359 net/dpaa2: not in enabled drivers build config 00:01:50.359 net/e1000: not in enabled drivers build config 00:01:50.359 net/ena: not in enabled drivers build config 00:01:50.359 net/enetc: not in enabled drivers build config 00:01:50.359 net/enetfec: not in enabled drivers build config 00:01:50.359 net/enic: not in enabled drivers build config 00:01:50.359 net/failsafe: not in enabled drivers build config 00:01:50.359 net/fm10k: not in enabled drivers build config 00:01:50.359 net/gve: not in enabled drivers build config 00:01:50.359 net/hinic: not in enabled drivers build config 00:01:50.359 net/hns3: not in enabled drivers build config 00:01:50.359 net/i40e: not in enabled drivers build config 00:01:50.359 net/iavf: not in enabled drivers build config 00:01:50.359 net/ice: not in enabled drivers build config 00:01:50.359 net/idpf: not in enabled drivers build config 00:01:50.359 net/igc: not in enabled drivers build config 00:01:50.359 net/ionic: not in enabled drivers build config 00:01:50.359 net/ipn3ke: not in enabled drivers build config 00:01:50.359 net/ixgbe: not in enabled drivers build config 00:01:50.359 net/mana: not in enabled drivers build config 00:01:50.359 net/memif: not in enabled drivers build config 00:01:50.359 net/mlx4: not in enabled drivers build config 00:01:50.359 net/mlx5: not in enabled drivers build config 00:01:50.359 net/mvneta: not in enabled drivers build config 00:01:50.359 net/mvpp2: not in enabled drivers build config 00:01:50.359 net/netvsc: not in enabled drivers build config 00:01:50.359 net/nfb: not in enabled drivers build config 00:01:50.359 net/nfp: not in enabled drivers build config 00:01:50.359 net/ngbe: not in enabled drivers build config 00:01:50.359 net/null: not in enabled drivers build config 00:01:50.359 net/octeontx: not in enabled drivers build config 00:01:50.359 net/octeon_ep: not in enabled drivers build config 00:01:50.359 
net/pcap: not in enabled drivers build config 00:01:50.359 net/pfe: not in enabled drivers build config 00:01:50.359 net/qede: not in enabled drivers build config 00:01:50.359 net/ring: not in enabled drivers build config 00:01:50.359 net/sfc: not in enabled drivers build config 00:01:50.359 net/softnic: not in enabled drivers build config 00:01:50.359 net/tap: not in enabled drivers build config 00:01:50.359 net/thunderx: not in enabled drivers build config 00:01:50.359 net/txgbe: not in enabled drivers build config 00:01:50.360 net/vdev_netvsc: not in enabled drivers build config 00:01:50.360 net/vhost: not in enabled drivers build config 00:01:50.360 net/virtio: not in enabled drivers build config 00:01:50.360 net/vmxnet3: not in enabled drivers build config 00:01:50.360 raw/*: missing internal dependency, "rawdev" 00:01:50.360 crypto/armv8: not in enabled drivers build config 00:01:50.360 crypto/bcmfs: not in enabled drivers build config 00:01:50.360 crypto/caam_jr: not in enabled drivers build config 00:01:50.360 crypto/ccp: not in enabled drivers build config 00:01:50.360 crypto/cnxk: not in enabled drivers build config 00:01:50.360 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.360 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.360 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.360 crypto/mlx5: not in enabled drivers build config 00:01:50.360 crypto/mvsam: not in enabled drivers build config 00:01:50.360 crypto/nitrox: not in enabled drivers build config 00:01:50.360 crypto/null: not in enabled drivers build config 00:01:50.360 crypto/octeontx: not in enabled drivers build config 00:01:50.360 crypto/openssl: not in enabled drivers build config 00:01:50.360 crypto/scheduler: not in enabled drivers build config 00:01:50.360 crypto/uadk: not in enabled drivers build config 00:01:50.360 crypto/virtio: not in enabled drivers build config 00:01:50.360 compress/isal: not in enabled drivers build config 00:01:50.360 compress/mlx5: not in enabled drivers build config 00:01:50.360 compress/nitrox: not in enabled drivers build config 00:01:50.360 compress/octeontx: not in enabled drivers build config 00:01:50.360 compress/zlib: not in enabled drivers build config 00:01:50.360 regex/*: missing internal dependency, "regexdev" 00:01:50.360 ml/*: missing internal dependency, "mldev" 00:01:50.360 vdpa/ifc: not in enabled drivers build config 00:01:50.360 vdpa/mlx5: not in enabled drivers build config 00:01:50.360 vdpa/nfp: not in enabled drivers build config 00:01:50.360 vdpa/sfc: not in enabled drivers build config 00:01:50.360 event/*: missing internal dependency, "eventdev" 00:01:50.360 baseband/*: missing internal dependency, "bbdev" 00:01:50.360 gpu/*: missing internal dependency, "gpudev" 00:01:50.360 00:01:50.360 00:01:50.360 Build targets in project: 85 00:01:50.360 00:01:50.360 DPDK 24.03.0 00:01:50.360 00:01:50.360 User defined options 00:01:50.360 buildtype : debug 00:01:50.360 default_library : shared 00:01:50.360 libdir : lib 00:01:50.360 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:50.360 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:50.360 c_link_args : 00:01:50.360 cpu_instruction_set: native 00:01:50.360 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:50.360 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:50.360 enable_docs : false 00:01:50.360 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:50.360 enable_kmods : false 00:01:50.360 max_lcores : 128 00:01:50.360 tests : false 00:01:50.360 00:01:50.360 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.360 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:50.625 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.625 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.625 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.625 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.625 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.625 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.625 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.625 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.625 [9/268] Linking static target lib/librte_kvargs.a 00:01:50.625 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.625 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.625 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.625 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.625 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.625 [15/268] Linking static target lib/librte_log.a 00:01:50.625 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.568 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.568 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.568 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.568 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.568 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.568 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.568 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.568 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.568 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.568 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.568 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.568 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.568 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.568 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 
00:01:51.568 [31/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:51.568 [32/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:51.568 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:51.569 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:51.569 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:51.569 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:51.569 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:51.569 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:51.569 [39/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:51.569 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:51.569 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:51.831 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:51.831 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:51.831 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:51.831 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:51.831 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:51.831 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:51.831 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:51.831 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:51.831 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:51.831 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:51.831 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:51.831 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:51.831 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:51.831 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:51.831 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:51.831 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:51.831 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:51.831 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:51.831 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:51.831 [61/268] Linking static target lib/librte_telemetry.a
00:01:51.831 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:51.831 [63/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.094 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:52.094 [65/268] Linking target lib/librte_log.so.24.1
00:01:52.094 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:52.094 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:52.358 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:52.358 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:52.358 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:52.358 [71/268] Linking static target lib/librte_pci.a
00:01:52.358 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:52.358 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:52.358 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:52.358 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:52.358 [76/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:52.358 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:52.621 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:52.621 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:52.621 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:52.621 [81/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:52.621 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:52.621 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:52.621 [84/268] Linking target lib/librte_kvargs.so.24.1
00:01:52.621 [85/268] Linking static target lib/librte_ring.a
00:01:52.621 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:52.621 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:52.621 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:52.621 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:52.621 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:52.621 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:52.621 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:52.621 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:52.621 [94/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:52.621 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:52.621 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:52.621 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:52.883 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:52.883 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:52.883 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:52.883 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:52.883 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:52.883 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:52.883 [104/268] Linking static target lib/librte_eal.a
00:01:52.883 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:52.883 [106/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:52.883 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:52.883 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:52.883 [109/268] Linking static target lib/librte_mempool.a
00:01:52.883 [110/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:52.883 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:52.883 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:52.883 [113/268] Linking static target lib/librte_meter.a
00:01:52.883 [114/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.145 [115/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.145 [116/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:53.145 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:53.145 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:53.145 [119/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:53.145 [120/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:53.145 [121/268] Linking static target lib/librte_rcu.a
00:01:53.145 [122/268] Linking target lib/librte_telemetry.so.24.1
00:01:53.145 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:53.145 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:53.145 [125/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.417 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:53.417 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:53.417 [128/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:53.417 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:53.417 [130/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:53.417 [131/268] Linking static target lib/librte_net.a
00:01:53.417 [132/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:53.417 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:53.417 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:53.417 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:53.417 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:53.417 [137/268] Linking static target lib/librte_cmdline.a
00:01:53.417 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:53.725 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:53.725 [140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.725 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:53.725 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:53.725 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:53.725 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:53.725 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:53.725 [146/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:53.725 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:53.725 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:53.725 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:54.001 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.001 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:54.001 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:54.001 [153/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.001 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:54.001 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:54.001 [156/268] Linking static target lib/librte_dmadev.a
00:01:54.001 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:54.001 [158/268] Linking static target lib/librte_timer.a
00:01:54.001 [159/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:54.001 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:54.001 [161/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:54.001 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:54.001 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:54.260 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:54.260 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:54.260 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.260 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:54.260 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:54.260 [169/268] Linking static target lib/librte_hash.a
00:01:54.260 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:54.260 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:54.519 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:54.519 [173/268] Linking static target lib/librte_compressdev.a
00:01:54.519 [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:54.519 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.519 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:54.519 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.519 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:54.519 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:54.519 [180/268] Linking static target lib/librte_power.a
00:01:54.519 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:54.519 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:54.519 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:54.777 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:54.777 [185/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.777 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:54.777 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:54.777 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:54.777 [189/268] Linking static target lib/librte_reorder.a
00:01:54.777 [190/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:54.777 [191/268] Linking static target lib/librte_mbuf.a
00:01:54.777 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:54.777 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:54.777 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:54.777 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:54.777 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:55.036 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:55.036 [198/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.036 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:55.036 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:55.036 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:55.036 [202/268] Linking static target drivers/librte_bus_vdev.a
00:01:55.036 [203/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.036 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:55.036 [205/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:55.036 [206/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:55.036 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:55.036 [208/268] Linking static target lib/librte_ethdev.a
00:01:55.036 [209/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:55.036 [210/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:55.036 [211/268] Linking static target lib/librte_security.a
00:01:55.036 [212/268] Linking static target lib/librte_cryptodev.a
00:01:55.036 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:55.036 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.036 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:55.036 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:55.036 [217/268] Linking static target drivers/librte_bus_pci.a
00:01:55.294 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.294 [219/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.294 [220/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:55.294 [221/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:55.294 [222/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:55.294 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.294 [224/268] Linking static target drivers/librte_mempool_ring.a
00:01:55.552 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.553 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.119 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.651 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:00.028 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.028 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.028 [231/268] Linking target lib/librte_eal.so.24.1
00:02:00.287 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:00.287 [233/268] Linking target lib/librte_pci.so.24.1
00:02:00.287 [234/268] Linking target lib/librte_meter.so.24.1
00:02:00.287 [235/268] Linking target lib/librte_timer.so.24.1
00:02:00.287 [236/268] Linking target lib/librte_dmadev.so.24.1
00:02:00.287 [237/268] Linking target lib/librte_ring.so.24.1
00:02:00.287 [238/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:00.547 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:00.547 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:00.547 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:00.547 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:00.547 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:00.547 [244/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:00.547 [245/268] Linking target lib/librte_rcu.so.24.1
00:02:00.547 [246/268] Linking target lib/librte_mempool.so.24.1
00:02:00.805 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:00.805 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:00.805 [249/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:00.805 [250/268] Linking target lib/librte_mbuf.so.24.1
00:02:01.070 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:01.070 [252/268] Linking target lib/librte_compressdev.so.24.1
00:02:01.070 [253/268] Linking target lib/librte_net.so.24.1
00:02:01.070 [254/268] Linking target lib/librte_reorder.so.24.1
00:02:01.070 [255/268] Linking target lib/librte_cryptodev.so.24.1
00:02:01.329 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:01.329 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:01.329 [258/268] Linking target lib/librte_cmdline.so.24.1
00:02:01.329 [259/268] Linking target lib/librte_ethdev.so.24.1
00:02:01.329 [260/268] Linking target lib/librte_hash.so.24.1
00:02:01.329 [261/268] Linking target lib/librte_security.so.24.1
00:02:01.588 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:01.588 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:01.588 [264/268] Linking target lib/librte_power.so.24.1
00:02:09.711 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:09.711 [266/268] Linking static target lib/librte_vhost.a
00:02:09.970 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.230 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:10.230 INFO: autodetecting backend as ninja
00:02:10.230 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48
00:02:11.165 CC lib/ut_mock/mock.o
00:02:11.165 CC lib/ut/ut.o
00:02:11.424 CC lib/log/log.o
00:02:11.424 CC lib/log/log_deprecated.o
00:02:11.424 CC lib/log/log_flags.o
00:02:11.683 LIB libspdk_ut_mock.a
00:02:11.683 LIB libspdk_ut.a
00:02:11.683 LIB libspdk_log.a
00:02:11.683 SO libspdk_ut_mock.so.6.0
00:02:11.683 SO libspdk_ut.so.2.0
00:02:11.683 SO libspdk_log.so.7.0
00:02:11.683 SYMLINK libspdk_ut_mock.so
00:02:11.683 SYMLINK libspdk_log.so
00:02:11.683 SYMLINK libspdk_ut.so
00:02:11.942 CC lib/util/base64.o
00:02:11.942 CC lib/util/cpuset.o
00:02:11.942 CC lib/util/bit_array.o
00:02:11.942 CC lib/util/crc16.o
00:02:11.942 CC lib/util/crc32.o
00:02:11.942 CC lib/util/crc32c.o
00:02:11.942 CC lib/util/crc32_ieee.o
00:02:11.942 CC lib/util/crc64.o
00:02:11.942 CC lib/util/dif.o
00:02:11.942 CC lib/util/fd.o
00:02:11.942 CC lib/util/fd_group.o
00:02:11.942 CC lib/util/file.o
00:02:11.942 CC lib/util/hexlify.o
00:02:11.942 CC lib/util/iov.o
00:02:11.942 CC lib/util/math.o
00:02:11.942 CC lib/util/net.o
00:02:11.942 CC lib/util/pipe.o
00:02:11.942 CC lib/ioat/ioat.o
00:02:11.942 CC lib/util/string.o
00:02:11.942 CC lib/util/strerror_tls.o
00:02:11.942 CC lib/util/uuid.o
00:02:11.942 CXX lib/trace_parser/trace.o
00:02:11.942 CC lib/util/xor.o
00:02:11.942 CC lib/util/zipf.o
00:02:11.942 CC lib/dma/dma.o
00:02:12.202 CC lib/vfio_user/host/vfio_user_pci.o
00:02:12.202 CC lib/vfio_user/host/vfio_user.o
00:02:12.202 LIB libspdk_dma.a
00:02:12.202 SO libspdk_dma.so.4.0
00:02:12.202 LIB libspdk_ioat.a
00:02:12.202 SYMLINK libspdk_dma.so
00:02:12.202 SO libspdk_ioat.so.7.0
00:02:12.461 SYMLINK libspdk_ioat.so
00:02:12.461 LIB libspdk_vfio_user.a
00:02:12.727 SO libspdk_vfio_user.so.5.0
00:02:12.727 SYMLINK libspdk_vfio_user.so
00:02:13.032 LIB libspdk_util.a
00:02:13.032 SO libspdk_util.so.10.0
00:02:13.607 SYMLINK libspdk_util.so
00:02:13.607 LIB libspdk_trace_parser.a
00:02:13.607 SO libspdk_trace_parser.so.5.0
00:02:13.607 CC lib/conf/conf.o
00:02:13.607 CC lib/vmd/vmd.o
00:02:13.607 CC lib/vmd/led.o
00:02:13.607 CC lib/rdma_utils/rdma_utils.o
00:02:13.607 CC lib/json/json_parse.o
00:02:13.607 CC lib/json/json_util.o
00:02:13.607 CC lib/json/json_write.o
00:02:13.607 CC lib/idxd/idxd.o
00:02:13.607 CC lib/idxd/idxd_user.o
00:02:13.607 CC lib/idxd/idxd_kernel.o
00:02:13.607 CC lib/rdma_provider/common.o
00:02:13.607 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:13.607 CC lib/env_dpdk/env.o
00:02:13.607 CC lib/env_dpdk/memory.o
00:02:13.607 CC lib/env_dpdk/pci.o
00:02:13.607 CC lib/env_dpdk/init.o
00:02:13.607 CC lib/env_dpdk/threads.o
00:02:13.607 CC lib/env_dpdk/pci_ioat.o
00:02:13.607 CC lib/env_dpdk/pci_virtio.o
00:02:13.607 CC lib/env_dpdk/pci_vmd.o
00:02:13.607 CC lib/env_dpdk/pci_idxd.o
00:02:13.607 CC lib/env_dpdk/pci_event.o
00:02:13.607 CC lib/env_dpdk/sigbus_handler.o
00:02:13.607 CC lib/env_dpdk/pci_dpdk.o
00:02:13.607 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:13.607 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:13.866 SYMLINK libspdk_trace_parser.so
00:02:13.866 LIB libspdk_rdma_provider.a
00:02:13.866 SO libspdk_rdma_provider.so.6.0
00:02:13.866 LIB libspdk_rdma_utils.a
00:02:13.866 SYMLINK libspdk_rdma_provider.so
00:02:13.866 LIB libspdk_conf.a
00:02:13.866 SO libspdk_rdma_utils.so.1.0
00:02:13.866 SO libspdk_conf.so.6.0
00:02:14.125 SYMLINK libspdk_rdma_utils.so
00:02:14.125 LIB libspdk_json.a
00:02:14.125 SYMLINK libspdk_conf.so
00:02:14.125 SO libspdk_json.so.6.0
00:02:14.125 SYMLINK libspdk_json.so
00:02:14.384 LIB libspdk_idxd.a
00:02:14.384 CC lib/jsonrpc/jsonrpc_server.o
00:02:14.384 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:14.384 CC lib/jsonrpc/jsonrpc_client.o
00:02:14.384 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:14.384 SO libspdk_idxd.so.12.0
00:02:14.384 SYMLINK libspdk_idxd.so
00:02:14.384 LIB libspdk_vmd.a
00:02:14.643 SO libspdk_vmd.so.6.0
00:02:14.643 SYMLINK libspdk_vmd.so
00:02:14.901 LIB libspdk_jsonrpc.a
00:02:14.901 SO libspdk_jsonrpc.so.6.0
00:02:14.901 SYMLINK libspdk_jsonrpc.so
00:02:15.159 CC lib/rpc/rpc.o
00:02:15.725 LIB libspdk_rpc.a
00:02:15.725 SO libspdk_rpc.so.6.0
00:02:15.984 SYMLINK libspdk_rpc.so
00:02:16.243 CC lib/trace/trace.o
00:02:16.243 CC lib/trace/trace_flags.o
00:02:16.243 CC lib/trace/trace_rpc.o
00:02:16.243 CC lib/notify/notify.o
00:02:16.243 CC lib/notify/notify_rpc.o
00:02:16.243 CC lib/keyring/keyring.o
00:02:16.243 CC lib/keyring/keyring_rpc.o
00:02:16.502 LIB libspdk_notify.a
00:02:16.502 LIB libspdk_keyring.a
00:02:16.502 SO libspdk_notify.so.6.0
00:02:16.502 SO libspdk_keyring.so.1.0
00:02:16.502 SYMLINK libspdk_keyring.so
00:02:16.502 SYMLINK libspdk_notify.so
00:02:16.502 LIB libspdk_trace.a
00:02:16.762 SO libspdk_trace.so.10.0
00:02:16.762 SYMLINK libspdk_trace.so
00:02:16.762 LIB libspdk_env_dpdk.a
00:02:17.020 SO libspdk_env_dpdk.so.15.0
00:02:17.020 CC lib/sock/sock.o
00:02:17.020 CC lib/sock/sock_rpc.o
00:02:17.020 CC lib/thread/thread.o
00:02:17.020 CC lib/thread/iobuf.o
00:02:17.279 SYMLINK libspdk_env_dpdk.so
00:02:17.847 LIB libspdk_sock.a
00:02:17.847 SO libspdk_sock.so.10.0
00:02:17.847 SYMLINK libspdk_sock.so
00:02:18.415 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:18.415 CC lib/nvme/nvme_ctrlr.o
00:02:18.415 CC lib/nvme/nvme_fabric.o
00:02:18.415 CC lib/nvme/nvme_ns_cmd.o
00:02:18.415 CC lib/nvme/nvme_ns.o
00:02:18.415 CC lib/nvme/nvme_pcie_common.o
00:02:18.415 CC lib/nvme/nvme_pcie.o
00:02:18.415 CC lib/nvme/nvme_qpair.o
00:02:18.415 CC lib/nvme/nvme.o
00:02:18.415 CC lib/nvme/nvme_quirks.o
00:02:18.415 CC lib/nvme/nvme_transport.o
00:02:18.415 CC lib/nvme/nvme_discovery.o
00:02:18.415 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:18.415 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:18.415 CC lib/nvme/nvme_tcp.o
00:02:18.415 CC lib/nvme/nvme_opal.o
00:02:18.415 CC lib/nvme/nvme_io_msg.o
00:02:18.415 CC lib/nvme/nvme_poll_group.o
00:02:18.415 CC lib/nvme/nvme_zns.o
00:02:18.415 CC lib/nvme/nvme_stubs.o
00:02:18.415 CC lib/nvme/nvme_auth.o
00:02:18.415 CC lib/nvme/nvme_cuse.o
00:02:18.415 CC lib/nvme/nvme_vfio_user.o
00:02:18.415 CC lib/nvme/nvme_rdma.o
00:02:19.352 LIB libspdk_thread.a
00:02:19.352 SO libspdk_thread.so.10.1
00:02:19.352 SYMLINK libspdk_thread.so
00:02:19.611 CC lib/init/json_config.o
00:02:19.611 CC lib/vfu_tgt/tgt_endpoint.o
00:02:19.611 CC lib/init/subsystem.o
00:02:19.611 CC lib/vfu_tgt/tgt_rpc.o
00:02:19.611 CC lib/init/rpc.o
00:02:19.611 CC lib/init/subsystem_rpc.o
00:02:19.611 CC lib/accel/accel.o
00:02:19.611 CC lib/virtio/virtio.o
00:02:19.612 CC lib/virtio/virtio_vhost_user.o
00:02:19.612 CC lib/virtio/virtio_vfio_user.o
00:02:19.612 CC lib/accel/accel_rpc.o
00:02:19.612 CC lib/accel/accel_sw.o
00:02:19.612 CC lib/virtio/virtio_pci.o
00:02:19.612 CC lib/blob/blobstore.o
00:02:19.612 CC lib/blob/request.o
00:02:19.612 CC lib/blob/zeroes.o
00:02:19.612 CC lib/blob/blob_bs_dev.o
00:02:19.869 LIB libspdk_init.a
00:02:19.869 SO libspdk_init.so.5.0
00:02:19.869 SYMLINK libspdk_init.so
00:02:19.869 LIB libspdk_virtio.a
00:02:19.869 LIB libspdk_vfu_tgt.a
00:02:19.869 SO libspdk_vfu_tgt.so.3.0
00:02:19.869 SO libspdk_virtio.so.7.0
00:02:20.128 SYMLINK libspdk_vfu_tgt.so
00:02:20.128 SYMLINK libspdk_virtio.so
00:02:20.128 CC lib/event/app.o
00:02:20.128 CC lib/event/log_rpc.o
00:02:20.128 CC lib/event/reactor.o
00:02:20.128 CC lib/event/scheduler_static.o
00:02:20.128 CC lib/event/app_rpc.o
00:02:20.694 LIB libspdk_accel.a
00:02:20.694 SO libspdk_accel.so.16.0
00:02:20.694 SYMLINK libspdk_accel.so
00:02:20.694 LIB libspdk_event.a
00:02:20.694 CC lib/bdev/bdev_rpc.o
00:02:20.694 CC lib/bdev/bdev.o
00:02:20.694 CC lib/bdev/bdev_zone.o
00:02:20.694 CC lib/bdev/part.o
00:02:20.694 CC lib/bdev/scsi_nvme.o
00:02:20.953 SO libspdk_event.so.14.0
00:02:20.953 SYMLINK libspdk_event.so
00:02:22.329 LIB libspdk_nvme.a
00:02:22.329 SO libspdk_nvme.so.13.1
00:02:22.588 SYMLINK libspdk_nvme.so
00:02:22.588 LIB libspdk_blob.a
00:02:22.588 SO libspdk_blob.so.11.0
00:02:22.847 SYMLINK libspdk_blob.so
00:02:23.106 CC lib/lvol/lvol.o
00:02:23.106 CC lib/blobfs/blobfs.o
00:02:23.106 CC lib/blobfs/tree.o
00:02:24.042 LIB libspdk_bdev.a
00:02:24.042 SO libspdk_bdev.so.16.0
00:02:24.042 SYMLINK libspdk_bdev.so
00:02:24.308 CC lib/scsi/dev.o
00:02:24.308 CC lib/scsi/lun.o
00:02:24.308 CC lib/scsi/port.o
00:02:24.308 CC lib/scsi/scsi.o
00:02:24.308 CC lib/scsi/scsi_bdev.o
00:02:24.308 CC lib/ftl/ftl_core.o
00:02:24.308 CC lib/ftl/ftl_init.o
00:02:24.308 CC lib/scsi/scsi_pr.o
00:02:24.308 CC lib/ftl/ftl_layout.o
00:02:24.308 CC lib/scsi/scsi_rpc.o
00:02:24.308 CC lib/scsi/task.o
00:02:24.308 CC lib/ftl/ftl_debug.o
00:02:24.308 CC lib/ftl/ftl_io.o
00:02:24.308 CC lib/ftl/ftl_sb.o
00:02:24.308 CC lib/ftl/ftl_l2p.o
00:02:24.308 CC lib/ftl/ftl_l2p_flat.o
00:02:24.308 CC lib/ftl/ftl_nv_cache.o
00:02:24.308 CC lib/ftl/ftl_band.o
00:02:24.308 CC lib/ftl/ftl_band_ops.o
00:02:24.308 CC lib/ftl/ftl_writer.o
00:02:24.308 CC lib/nbd/nbd.o
00:02:24.308 CC lib/ftl/ftl_l2p_cache.o
00:02:24.308 CC lib/ftl/ftl_reloc.o
00:02:24.308 CC lib/ftl/ftl_rq.o
00:02:24.308 CC lib/nbd/nbd_rpc.o
00:02:24.308 CC lib/ftl/ftl_p2l.o
00:02:24.308 CC lib/ublk/ublk.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:24.308 CC lib/ublk/ublk_rpc.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:24.308 CC lib/nvmf/ctrlr.o
00:02:24.308 CC lib/nvmf/ctrlr_discovery.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:24.308 CC lib/nvmf/ctrlr_bdev.o
00:02:24.308 CC lib/nvmf/subsystem.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:24.308 CC lib/nvmf/nvmf.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:24.308 CC lib/nvmf/nvmf_rpc.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:24.308 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:24.308 CC lib/nvmf/transport.o
00:02:24.567 LIB libspdk_lvol.a
00:02:24.567 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:24.567 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:24.567 SO libspdk_lvol.so.10.0
00:02:24.567 CC lib/nvmf/tcp.o
00:02:24.567 LIB libspdk_blobfs.a
00:02:24.567 CC lib/ftl/utils/ftl_conf.o
00:02:24.567 CC lib/ftl/utils/ftl_md.o
00:02:24.567 CC lib/nvmf/stubs.o
00:02:24.567 CC lib/nvmf/mdns_server.o
00:02:24.567 CC lib/ftl/utils/ftl_mempool.o
00:02:24.567 SO libspdk_blobfs.so.10.0
00:02:24.567 CC lib/nvmf/vfio_user.o
00:02:24.567 SYMLINK libspdk_lvol.so
00:02:24.567 CC lib/nvmf/rdma.o
00:02:24.567 CC lib/ftl/utils/ftl_bitmap.o
00:02:24.827 CC lib/nvmf/auth.o
00:02:24.827 CC lib/ftl/utils/ftl_property.o
00:02:24.827 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:24.827 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:24.827 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:24.827 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:24.827 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:24.827 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:24.827 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:24.827 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:24.827 SYMLINK libspdk_blobfs.so
00:02:24.827 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:24.827 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:24.827 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:24.827 CC lib/ftl/base/ftl_base_dev.o
00:02:24.827 CC lib/ftl/base/ftl_base_bdev.o
00:02:24.827 CC lib/ftl/ftl_trace.o
00:02:25.087 LIB libspdk_nbd.a
00:02:25.087 LIB libspdk_scsi.a
00:02:25.087 SO libspdk_nbd.so.7.0
00:02:25.087 SO libspdk_scsi.so.9.0
00:02:25.347 SYMLINK libspdk_nbd.so
00:02:25.347 SYMLINK libspdk_scsi.so
00:02:25.347 LIB libspdk_ublk.a
00:02:25.347 SO libspdk_ublk.so.3.0
00:02:25.606 CC lib/iscsi/conn.o
00:02:25.606 CC lib/iscsi/init_grp.o
00:02:25.606 CC lib/iscsi/iscsi.o
00:02:25.606 CC lib/iscsi/md5.o
00:02:25.606 CC lib/iscsi/param.o
00:02:25.606 CC lib/iscsi/portal_grp.o
00:02:25.606 CC lib/iscsi/tgt_node.o
00:02:25.606 CC lib/iscsi/iscsi_subsystem.o
00:02:25.606 CC lib/iscsi/iscsi_rpc.o
00:02:25.606 CC lib/iscsi/task.o
00:02:25.606 CC lib/vhost/vhost.o
00:02:25.606 CC lib/vhost/vhost_rpc.o
00:02:25.606 CC lib/vhost/vhost_scsi.o
00:02:25.606 CC lib/vhost/vhost_blk.o
00:02:25.606 CC lib/vhost/rte_vhost_user.o
00:02:25.606 SYMLINK libspdk_ublk.so
00:02:25.865 LIB libspdk_ftl.a
00:02:26.123 SO libspdk_ftl.so.9.0
00:02:26.382 SYMLINK libspdk_ftl.so
00:02:27.348 LIB libspdk_iscsi.a
00:02:27.348 SO libspdk_iscsi.so.8.0
00:02:27.348 LIB libspdk_vhost.a
00:02:27.348 SO libspdk_vhost.so.8.0
00:02:27.348 LIB libspdk_nvmf.a
00:02:27.348 SYMLINK libspdk_vhost.so
00:02:27.348 SO libspdk_nvmf.so.19.0
00:02:27.607 SYMLINK libspdk_iscsi.so
00:02:27.866 SYMLINK libspdk_nvmf.so
00:02:28.126 CC module/vfu_device/vfu_virtio.o
00:02:28.126 CC module/vfu_device/vfu_virtio_blk.o
00:02:28.126 CC module/vfu_device/vfu_virtio_scsi.o
00:02:28.126 CC module/vfu_device/vfu_virtio_rpc.o
00:02:28.126 CC module/env_dpdk/env_dpdk_rpc.o
00:02:28.126 CC module/accel/error/accel_error.o
00:02:28.126 CC module/accel/error/accel_error_rpc.o
00:02:28.126 CC module/sock/posix/posix.o
00:02:28.126 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:28.126 CC module/accel/iaa/accel_iaa.o
00:02:28.126 CC module/accel/iaa/accel_iaa_rpc.o
00:02:28.126 CC module/accel/ioat/accel_ioat.o
00:02:28.126 CC module/accel/dsa/accel_dsa.o
00:02:28.126 CC module/accel/ioat/accel_ioat_rpc.o
00:02:28.126 CC module/accel/dsa/accel_dsa_rpc.o
00:02:28.126 CC module/blob/bdev/blob_bdev.o
00:02:28.126 CC module/scheduler/gscheduler/gscheduler.o
00:02:28.126 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:28.126 CC module/keyring/file/keyring.o
00:02:28.126 CC module/keyring/file/keyring_rpc.o
00:02:28.126 CC module/keyring/linux/keyring.o
00:02:28.126 CC module/keyring/linux/keyring_rpc.o
00:02:28.385 LIB libspdk_env_dpdk_rpc.a
00:02:28.385 SO libspdk_env_dpdk_rpc.so.6.0
00:02:28.385 LIB libspdk_keyring_linux.a
00:02:28.385 LIB libspdk_scheduler_dpdk_governor.a
00:02:28.385 SYMLINK libspdk_env_dpdk_rpc.so
00:02:28.385 LIB libspdk_accel_error.a
00:02:28.385 SO libspdk_keyring_linux.so.1.0
00:02:28.385 LIB libspdk_keyring_file.a
00:02:28.385 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:28.385 LIB libspdk_accel_ioat.a
00:02:28.385 LIB libspdk_scheduler_gscheduler.a
00:02:28.385 SO libspdk_accel_error.so.2.0
00:02:28.385 LIB libspdk_scheduler_dynamic.a
00:02:28.385 SO libspdk_keyring_file.so.1.0
00:02:28.385 SO libspdk_scheduler_gscheduler.so.4.0
00:02:28.385 SO libspdk_accel_ioat.so.6.0
00:02:28.385 SO libspdk_scheduler_dynamic.so.4.0
00:02:28.385 SYMLINK libspdk_keyring_linux.so
00:02:28.385 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:28.385 SYMLINK libspdk_accel_error.so
00:02:28.385 SYMLINK libspdk_scheduler_gscheduler.so
00:02:28.385 SYMLINK libspdk_scheduler_dynamic.so
00:02:28.385 SYMLINK libspdk_accel_ioat.so
00:02:28.385 LIB libspdk_accel_iaa.a
00:02:28.385 SYMLINK libspdk_keyring_file.so
00:02:28.644 SO libspdk_accel_iaa.so.3.0
00:02:28.644 SYMLINK libspdk_accel_iaa.so
00:02:28.644 LIB libspdk_accel_dsa.a
00:02:28.644 LIB libspdk_blob_bdev.a
00:02:28.644 SO libspdk_accel_dsa.so.5.0
00:02:28.644 SO libspdk_blob_bdev.so.11.0
00:02:28.644 SYMLINK libspdk_accel_dsa.so
00:02:28.644 SYMLINK libspdk_blob_bdev.so
00:02:28.903 LIB libspdk_vfu_device.a
00:02:28.903 SO libspdk_vfu_device.so.3.0
00:02:28.903 SYMLINK libspdk_vfu_device.so
00:02:28.903 LIB libspdk_sock_posix.a
00:02:29.163 SO libspdk_sock_posix.so.6.0
00:02:29.163 CC module/blobfs/bdev/blobfs_bdev.o
00:02:29.163 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:29.163 CC module/bdev/passthru/vbdev_passthru.o
00:02:29.163 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:29.163 CC module/bdev/lvol/vbdev_lvol.o
00:02:29.163 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:29.163 CC module/bdev/delay/vbdev_delay.o
00:02:29.163 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:29.163 CC module/bdev/raid/bdev_raid.o
00:02:29.163 CC module/bdev/raid/bdev_raid_rpc.o
00:02:29.163 CC module/bdev/aio/bdev_aio.o
00:02:29.163 CC module/bdev/error/vbdev_error.o
00:02:29.163 CC module/bdev/raid/bdev_raid_sb.o
00:02:29.163 CC module/bdev/error/vbdev_error_rpc.o
00:02:29.163 CC module/bdev/raid/raid0.o
00:02:29.163 CC module/bdev/aio/bdev_aio_rpc.o
00:02:29.163 CC module/bdev/gpt/gpt.o
00:02:29.163 CC module/bdev/gpt/vbdev_gpt.o
00:02:29.163 CC module/bdev/malloc/bdev_malloc.o
00:02:29.163 CC module/bdev/raid/raid1.o
00:02:29.163 CC module/bdev/raid/concat.o
00:02:29.163 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:29.163 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:29.163 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:29.163 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:29.163 CC module/bdev/null/bdev_null.o
00:02:29.163 CC module/bdev/null/bdev_null_rpc.o
00:02:29.163 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:29.163 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:29.163 CC module/bdev/iscsi/bdev_iscsi.o
00:02:29.163 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:29.163 CC module/bdev/ftl/bdev_ftl.o
00:02:29.163 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:29.163 CC module/bdev/split/vbdev_split.o
00:02:29.163 CC module/bdev/nvme/bdev_nvme.o
00:02:29.163 CC module/bdev/split/vbdev_split_rpc.o
00:02:29.163 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:29.163 CC module/bdev/nvme/nvme_rpc.o
00:02:29.163 CC module/bdev/nvme/bdev_mdns_client.o
00:02:29.163 CC module/bdev/nvme/vbdev_opal.o
00:02:29.163 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:29.163 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:29.163 SYMLINK libspdk_sock_posix.so
00:02:29.422 LIB libspdk_blobfs_bdev.a
00:02:29.422 SO libspdk_blobfs_bdev.so.6.0
00:02:29.422 LIB libspdk_bdev_split.a
00:02:29.422 SYMLINK libspdk_blobfs_bdev.so
00:02:29.422 LIB libspdk_bdev_zone_block.a
00:02:29.422 SO libspdk_bdev_split.so.6.0
00:02:29.422 LIB libspdk_bdev_gpt.a
00:02:29.422 SO libspdk_bdev_zone_block.so.6.0
00:02:29.681 SO libspdk_bdev_gpt.so.6.0
00:02:29.681 LIB libspdk_bdev_ftl.a
00:02:29.681 SYMLINK libspdk_bdev_split.so
00:02:29.681 LIB libspdk_bdev_null.a
00:02:29.681 SO libspdk_bdev_ftl.so.6.0
00:02:29.681 SYMLINK libspdk_bdev_zone_block.so
00:02:29.681 SO libspdk_bdev_null.so.6.0
00:02:29.681 SYMLINK libspdk_bdev_gpt.so
00:02:29.681 SYMLINK libspdk_bdev_ftl.so
00:02:29.681 LIB libspdk_bdev_error.a
00:02:29.681 SYMLINK libspdk_bdev_null.so
00:02:29.681 SO libspdk_bdev_error.so.6.0
00:02:29.681 LIB libspdk_bdev_iscsi.a
00:02:29.681 SO libspdk_bdev_iscsi.so.6.0
00:02:29.681 LIB libspdk_bdev_passthru.a
00:02:29.681 LIB libspdk_bdev_malloc.a
00:02:29.681 LIB libspdk_bdev_delay.a
00:02:29.681 SYMLINK libspdk_bdev_error.so
00:02:29.681 SO libspdk_bdev_passthru.so.6.0
00:02:29.681 SO libspdk_bdev_malloc.so.6.0
00:02:29.681 SO libspdk_bdev_delay.so.6.0
00:02:29.681 LIB libspdk_bdev_aio.a
00:02:29.681 SYMLINK libspdk_bdev_iscsi.so
00:02:29.681 LIB libspdk_bdev_virtio.a
00:02:29.939 SO libspdk_bdev_aio.so.6.0
00:02:29.939 SYMLINK libspdk_bdev_passthru.so
00:02:29.939 SO libspdk_bdev_virtio.so.6.0
00:02:29.939 SYMLINK libspdk_bdev_malloc.so
00:02:29.939 SYMLINK libspdk_bdev_delay.so
00:02:29.939 SYMLINK libspdk_bdev_aio.so
00:02:29.939 SYMLINK libspdk_bdev_virtio.so
00:02:30.197 LIB libspdk_bdev_lvol.a
00:02:30.197 SO libspdk_bdev_lvol.so.6.0
00:02:30.197 SYMLINK libspdk_bdev_lvol.so
00:02:30.455 LIB libspdk_bdev_raid.a
00:02:30.455 SO libspdk_bdev_raid.so.6.0
00:02:30.714 SYMLINK libspdk_bdev_raid.so
00:02:34.902 LIB libspdk_bdev_nvme.a
00:02:34.902 SO libspdk_bdev_nvme.so.7.0
00:02:34.902 SYMLINK libspdk_bdev_nvme.so
00:02:34.902 CC module/event/subsystems/sock/sock.o
00:02:34.902 CC module/event/subsystems/vmd/vmd.o
00:02:34.902 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:34.902 CC module/event/subsystems/iobuf/iobuf.o
00:02:34.902 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:34.902 CC module/event/subsystems/keyring/keyring.o
00:02:34.902 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:34.902 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:34.902 CC module/event/subsystems/scheduler/scheduler.o
00:02:35.161 LIB libspdk_event_keyring.a
00:02:35.161 LIB libspdk_event_vhost_blk.a
00:02:35.161 SO libspdk_event_keyring.so.1.0
00:02:35.161 LIB libspdk_event_scheduler.a
00:02:35.161 SO libspdk_event_vhost_blk.so.3.0
00:02:35.161 LIB libspdk_event_sock.a
00:02:35.161 LIB libspdk_event_vmd.a
00:02:35.161 SO libspdk_event_scheduler.so.4.0
00:02:35.161 LIB libspdk_event_vfu_tgt.a
00:02:35.161 LIB libspdk_event_iobuf.a
00:02:35.161 SO libspdk_event_sock.so.5.0
00:02:35.161 SO libspdk_event_vmd.so.6.0
00:02:35.161 SYMLINK libspdk_event_vhost_blk.so
00:02:35.161 SO libspdk_event_vfu_tgt.so.3.0
00:02:35.161 SYMLINK libspdk_event_keyring.so
00:02:35.161 SO libspdk_event_iobuf.so.3.0
00:02:35.161 SYMLINK libspdk_event_scheduler.so
00:02:35.161 SYMLINK libspdk_event_vmd.so
00:02:35.161 SYMLINK libspdk_event_vfu_tgt.so
00:02:35.161 SYMLINK libspdk_event_sock.so
00:02:35.161 SYMLINK libspdk_event_iobuf.so
00:02:35.728 CC module/event/subsystems/accel/accel.o
00:02:35.728 LIB libspdk_event_accel.a
00:02:35.728 SO libspdk_event_accel.so.6.0
00:02:35.987 SYMLINK libspdk_event_accel.so
00:02:36.246 CC module/event/subsystems/bdev/bdev.o
00:02:36.505 LIB libspdk_event_bdev.a
00:02:36.505 SO libspdk_event_bdev.so.6.0
00:02:36.764 SYMLINK libspdk_event_bdev.so
00:02:37.023 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:37.023 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:37.023 CC module/event/subsystems/nbd/nbd.o
00:02:37.023 CC module/event/subsystems/scsi/scsi.o
00:02:37.023 CC module/event/subsystems/ublk/ublk.o
00:02:37.023 LIB libspdk_event_nbd.a
00:02:37.023 LIB libspdk_event_ublk.a
00:02:37.023 LIB libspdk_event_scsi.a
00:02:37.023 SO libspdk_event_nbd.so.6.0
00:02:37.281 SO libspdk_event_ublk.so.3.0
00:02:37.281 SO libspdk_event_scsi.so.6.0
00:02:37.281 SYMLINK libspdk_event_nbd.so
00:02:37.281 LIB libspdk_event_nvmf.a
00:02:37.281 SYMLINK libspdk_event_ublk.so
00:02:37.281 SO libspdk_event_nvmf.so.6.0
00:02:37.281 SYMLINK libspdk_event_scsi.so
00:02:37.281 SYMLINK libspdk_event_nvmf.so
00:02:37.540 CC module/event/subsystems/iscsi/iscsi.o
00:02:37.540 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:37.800 LIB libspdk_event_iscsi.a
00:02:37.800 SO libspdk_event_iscsi.so.6.0
00:02:37.800 LIB libspdk_event_vhost_scsi.a
00:02:37.800 SYMLINK libspdk_event_iscsi.so
00:02:37.800 SO libspdk_event_vhost_scsi.so.3.0
00:02:38.057 SYMLINK libspdk_event_vhost_scsi.so
00:02:38.057 SO libspdk.so.6.0
00:02:38.057 SYMLINK libspdk.so
00:02:38.317 CXX app/trace/trace.o
00:02:38.317 CC app/trace_record/trace_record.o
00:02:38.317 CC app/spdk_lspci/spdk_lspci.o
00:02:38.317 CC test/rpc_client/rpc_client_test.o
00:02:38.317 CC app/spdk_nvme_identify/identify.o
00:02:38.317 CC app/spdk_top/spdk_top.o
00:02:38.317 CC app/spdk_nvme_discover/discovery_aer.o
00:02:38.317 TEST_HEADER include/spdk/accel.h
00:02:38.317 TEST_HEADER include/spdk/accel_module.h
00:02:38.317 TEST_HEADER include/spdk/assert.h
00:02:38.317 TEST_HEADER include/spdk/barrier.h
00:02:38.317 TEST_HEADER include/spdk/base64.h
00:02:38.317 TEST_HEADER include/spdk/bdev.h
00:02:38.317 TEST_HEADER include/spdk/bdev_module.h
00:02:38.317 TEST_HEADER include/spdk/bdev_zone.h
00:02:38.317 TEST_HEADER include/spdk/bit_array.h
00:02:38.317 TEST_HEADER include/spdk/bit_pool.h
00:02:38.317 TEST_HEADER include/spdk/blob_bdev.h
00:02:38.317 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:38.317 TEST_HEADER include/spdk/blobfs.h
00:02:38.317 TEST_HEADER include/spdk/blob.h
00:02:38.317 TEST_HEADER include/spdk/conf.h
00:02:38.317 TEST_HEADER include/spdk/config.h
00:02:38.317 TEST_HEADER include/spdk/cpuset.h
00:02:38.317 TEST_HEADER include/spdk/crc16.h
00:02:38.317 TEST_HEADER include/spdk/crc32.h
00:02:38.317 TEST_HEADER include/spdk/crc64.h
00:02:38.317 TEST_HEADER include/spdk/dif.h
00:02:38.317 TEST_HEADER include/spdk/dma.h
00:02:38.317 TEST_HEADER include/spdk/endian.h
00:02:38.317 TEST_HEADER include/spdk/env_dpdk.h
00:02:38.317 TEST_HEADER include/spdk/env.h
00:02:38.317 TEST_HEADER include/spdk/event.h
00:02:38.317 TEST_HEADER include/spdk/fd_group.h
00:02:38.317 TEST_HEADER include/spdk/fd.h
00:02:38.317 TEST_HEADER include/spdk/file.h
00:02:38.317 TEST_HEADER include/spdk/ftl.h
00:02:38.317 TEST_HEADER include/spdk/gpt_spec.h
00:02:38.317 TEST_HEADER include/spdk/hexlify.h
00:02:38.317 TEST_HEADER include/spdk/idxd.h
00:02:38.317 TEST_HEADER include/spdk/histogram_data.h
00:02:38.317 TEST_HEADER include/spdk/idxd_spec.h
00:02:38.317 TEST_HEADER include/spdk/init.h
00:02:38.317 TEST_HEADER include/spdk/ioat.h
00:02:38.317 TEST_HEADER include/spdk/ioat_spec.h
00:02:38.317 TEST_HEADER include/spdk/iscsi_spec.h
00:02:38.317 TEST_HEADER include/spdk/json.h
00:02:38.317 TEST_HEADER include/spdk/jsonrpc.h
00:02:38.317 TEST_HEADER include/spdk/keyring.h
00:02:38.317 TEST_HEADER include/spdk/keyring_module.h
00:02:38.317 TEST_HEADER include/spdk/likely.h
00:02:38.317 TEST_HEADER include/spdk/lvol.h
00:02:38.317 TEST_HEADER include/spdk/log.h
00:02:38.317 TEST_HEADER include/spdk/memory.h
00:02:38.317 TEST_HEADER include/spdk/mmio.h
00:02:38.317 TEST_HEADER include/spdk/nbd.h
00:02:38.317 TEST_HEADER include/spdk/net.h
00:02:38.317 TEST_HEADER include/spdk/notify.h
00:02:38.317 TEST_HEADER include/spdk/nvme.h
00:02:38.317 TEST_HEADER include/spdk/nvme_intel.h
00:02:38.317 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:38.317 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:38.317 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:38.317 TEST_HEADER include/spdk/nvme_spec.h
00:02:38.317 TEST_HEADER include/spdk/nvme_zns.h
00:02:38.317 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:38.317 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:38.317 TEST_HEADER include/spdk/nvmf.h
00:02:38.317 TEST_HEADER include/spdk/nvmf_spec.h
00:02:38.317 TEST_HEADER include/spdk/nvmf_transport.h
00:02:38.317 TEST_HEADER include/spdk/opal.h
00:02:38.317 TEST_HEADER include/spdk/opal_spec.h
00:02:38.317 TEST_HEADER include/spdk/pipe.h
00:02:38.317 TEST_HEADER include/spdk/pci_ids.h
00:02:38.317 TEST_HEADER include/spdk/queue.h
00:02:38.317 TEST_HEADER include/spdk/reduce.h
00:02:38.317 TEST_HEADER include/spdk/rpc.h
00:02:38.317 TEST_HEADER include/spdk/scheduler.h
00:02:38.317 TEST_HEADER include/spdk/scsi.h
00:02:38.317 TEST_HEADER include/spdk/scsi_spec.h
00:02:38.317 TEST_HEADER include/spdk/sock.h
00:02:38.317 TEST_HEADER include/spdk/string.h
00:02:38.317 TEST_HEADER include/spdk/stdinc.h
00:02:38.317 TEST_HEADER include/spdk/thread.h
00:02:38.317 TEST_HEADER include/spdk/trace.h
00:02:38.317 TEST_HEADER include/spdk/tree.h
00:02:38.317 TEST_HEADER include/spdk/trace_parser.h
00:02:38.317 TEST_HEADER include/spdk/ublk.h
00:02:38.317 TEST_HEADER include/spdk/util.h
00:02:38.317 TEST_HEADER include/spdk/uuid.h
00:02:38.317 TEST_HEADER include/spdk/version.h
00:02:38.317 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:38.317 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:38.317 TEST_HEADER include/spdk/vhost.h
00:02:38.317 TEST_HEADER include/spdk/vmd.h
00:02:38.317 TEST_HEADER include/spdk/xor.h
00:02:38.317 TEST_HEADER include/spdk/zipf.h
00:02:38.317 CXX test/cpp_headers/accel_module.o
00:02:38.317 CXX test/cpp_headers/accel.o
00:02:38.317 CXX test/cpp_headers/assert.o
00:02:38.317 CXX test/cpp_headers/barrier.o
00:02:38.317 CXX test/cpp_headers/bdev.o
00:02:38.317 CXX test/cpp_headers/base64.o
00:02:38.317 CXX test/cpp_headers/bdev_module.o
00:02:38.582 CXX test/cpp_headers/bdev_zone.o
00:02:38.582 CXX test/cpp_headers/bit_array.o
00:02:38.582 CXX test/cpp_headers/bit_pool.o
00:02:38.582 CXX test/cpp_headers/blob_bdev.o
00:02:38.582 CXX test/cpp_headers/blobfs_bdev.o
00:02:38.582 CXX test/cpp_headers/blobfs.o
00:02:38.582 CXX test/cpp_headers/blob.o
00:02:38.582 CXX test/cpp_headers/conf.o
00:02:38.582 CXX test/cpp_headers/config.o
00:02:38.582 CXX test/cpp_headers/cpuset.o
00:02:38.582 CXX test/cpp_headers/crc16.o
00:02:38.582 CC app/spdk_dd/spdk_dd.o
00:02:38.582 CC app/nvmf_tgt/nvmf_main.o
00:02:38.582 CC app/iscsi_tgt/iscsi_tgt.o
00:02:38.582 CC app/spdk_tgt/spdk_tgt.o
00:02:38.582 CC examples/ioat/verify/verify.o
00:02:38.582 CC examples/util/zipf/zipf.o
00:02:38.582 CC examples/ioat/perf/perf.o
00:02:38.582 CXX test/cpp_headers/crc32.o
00:02:38.582 CC test/app/stub/stub.o
00:02:38.582 CC test/app/histogram_perf/histogram_perf.o
00:02:38.582 CC test/thread/poller_perf/poller_perf.o
00:02:38.582 CC test/app/jsoncat/jsoncat.o
00:02:38.582 CC test/env/vtophys/vtophys.o
00:02:38.582 CC app/fio/nvme/fio_plugin.o
00:02:38.582 CC test/env/pci/pci_ut.o
00:02:38.582 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:38.582 CC test/env/memory/memory_ut.o
00:02:38.582 CC test/dma/test_dma/test_dma.o
00:02:38.582 CC app/fio/bdev/fio_plugin.o
00:02:38.582 CC test/app/bdev_svc/bdev_svc.o
00:02:38.582 LINK spdk_lspci
00:02:38.843 CC test/env/mem_callbacks/mem_callbacks.o
00:02:38.843 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:38.843 LINK rpc_client_test
00:02:38.843 LINK interrupt_tgt
00:02:38.843 LINK jsoncat
00:02:38.843 LINK poller_perf
00:02:38.843 LINK spdk_trace_record
00:02:38.843 LINK spdk_nvme_discover
00:02:38.843 LINK nvmf_tgt
00:02:38.843 CXX test/cpp_headers/crc64.o
00:02:38.843 CXX test/cpp_headers/dif.o
00:02:38.843 LINK histogram_perf
00:02:38.843 CXX test/cpp_headers/dma.o
00:02:38.843 CXX test/cpp_headers/endian.o
00:02:38.843 CXX test/cpp_headers/env_dpdk.o
00:02:38.843 CXX test/cpp_headers/env.o
00:02:38.843 LINK vtophys
00:02:39.109 CXX test/cpp_headers/event.o
00:02:39.109 LINK zipf
00:02:39.109 CXX test/cpp_headers/fd_group.o
00:02:39.109 LINK spdk_tgt
00:02:39.109 LINK stub
00:02:39.109 CXX test/cpp_headers/fd.o
00:02:39.109 CXX test/cpp_headers/file.o
00:02:39.109 CXX test/cpp_headers/ftl.o
00:02:39.109 LINK iscsi_tgt
00:02:39.109 LINK env_dpdk_post_init
00:02:39.109 CXX test/cpp_headers/gpt_spec.o
00:02:39.109 CXX test/cpp_headers/hexlify.o
00:02:39.109 CXX test/cpp_headers/histogram_data.o
00:02:39.109 CXX test/cpp_headers/idxd.o
00:02:39.109 CXX test/cpp_headers/idxd_spec.o
00:02:39.109 LINK verify
00:02:39.109 LINK ioat_perf
00:02:39.109 CXX test/cpp_headers/init.o
00:02:39.109 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:39.109 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:39.109 LINK bdev_svc
00:02:39.109 CXX test/cpp_headers/ioat.o
00:02:39.109 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:39.109 CXX test/cpp_headers/ioat_spec.o
00:02:39.109 CXX test/cpp_headers/iscsi_spec.o
00:02:39.371 CXX test/cpp_headers/json.o
00:02:39.371 CXX test/cpp_headers/jsonrpc.o
00:02:39.371 LINK spdk_dd
00:02:39.371 CXX test/cpp_headers/keyring.o
00:02:39.371 CXX test/cpp_headers/keyring_module.o
00:02:39.371 CXX test/cpp_headers/likely.o
00:02:39.371 CXX test/cpp_headers/log.o
00:02:39.371 LINK pci_ut
00:02:39.371 CXX test/cpp_headers/lvol.o
00:02:39.371 LINK test_dma
00:02:39.371 CXX test/cpp_headers/memory.o
00:02:39.371 CXX test/cpp_headers/mmio.o
00:02:39.371 LINK spdk_trace
00:02:39.371 CXX test/cpp_headers/nbd.o
00:02:39.371 CXX test/cpp_headers/net.o
00:02:39.371 CXX test/cpp_headers/notify.o
00:02:39.371 CXX test/cpp_headers/nvme.o
00:02:39.371 CXX test/cpp_headers/nvme_intel.o
00:02:39.648 CXX test/cpp_headers/nvme_ocssd.o
00:02:39.648 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:39.648 CXX test/cpp_headers/nvme_spec.o
00:02:39.648 CXX test/cpp_headers/nvme_zns.o
00:02:39.648 CXX test/cpp_headers/nvmf_cmd.o
00:02:39.648 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:39.648 CXX test/cpp_headers/nvmf.o
00:02:39.648 CXX test/cpp_headers/nvmf_spec.o
00:02:39.648 CXX test/cpp_headers/nvmf_transport.o
00:02:39.648 CXX test/cpp_headers/opal.o
00:02:39.648 CXX test/cpp_headers/opal_spec.o
00:02:39.648 CC test/event/event_perf/event_perf.o
00:02:39.648 CXX test/cpp_headers/pci_ids.o
00:02:39.648 CC test/event/reactor/reactor.o
00:02:39.648 CC test/event/reactor_perf/reactor_perf.o
00:02:39.648 LINK spdk_bdev
00:02:39.648 LINK spdk_nvme
00:02:39.648 CC test/event/app_repeat/app_repeat.o
00:02:39.648 LINK nvme_fuzz
00:02:39.648 CXX test/cpp_headers/pipe.o
00:02:39.648 CXX test/cpp_headers/queue.o
00:02:39.648 CXX test/cpp_headers/reduce.o
00:02:39.648 CC examples/sock/hello_world/hello_sock.o
00:02:39.648 CC examples/vmd/lsvmd/lsvmd.o
00:02:39.648 CXX test/cpp_headers/rpc.o
00:02:39.648 CC test/event/scheduler/scheduler.o
00:02:39.648 CC examples/vmd/led/led.o
00:02:39.910 CXX test/cpp_headers/scheduler.o
00:02:39.910 CXX test/cpp_headers/scsi.o
00:02:39.910 CC examples/idxd/perf/perf.o
00:02:39.910 CXX test/cpp_headers/scsi_spec.o
00:02:39.910 CC examples/thread/thread/thread_ex.o
00:02:39.910 CXX test/cpp_headers/sock.o
00:02:39.910 CXX test/cpp_headers/stdinc.o
00:02:39.910 CXX test/cpp_headers/string.o
00:02:39.910 CXX test/cpp_headers/thread.o
00:02:39.910 CXX test/cpp_headers/trace.o
00:02:39.910 CXX test/cpp_headers/trace_parser.o
00:02:39.910 LINK event_perf
00:02:39.910 LINK mem_callbacks
00:02:39.910 CXX test/cpp_headers/tree.o
00:02:39.910 CXX test/cpp_headers/ublk.o
00:02:39.910 LINK spdk_nvme_perf
00:02:39.910 CXX test/cpp_headers/util.o
00:02:39.910 LINK vhost_fuzz
00:02:39.910 LINK reactor
00:02:39.910 CXX test/cpp_headers/uuid.o
00:02:39.910 LINK spdk_nvme_identify
00:02:39.910 CXX test/cpp_headers/version.o
00:02:39.910 CXX test/cpp_headers/vfio_user_pci.o
00:02:39.910 LINK app_repeat
00:02:40.175 LINK reactor_perf
00:02:40.175 CXX test/cpp_headers/vfio_user_spec.o
00:02:40.175 CXX test/cpp_headers/vhost.o
00:02:40.175 LINK spdk_top
00:02:40.175 LINK lsvmd
00:02:40.175 CXX test/cpp_headers/vmd.o
00:02:40.175 CXX test/cpp_headers/xor.o
00:02:40.175 CXX test/cpp_headers/zipf.o
00:02:40.175 LINK led
00:02:40.175 CC app/vhost/vhost.o
00:02:40.175 CC test/nvme/sgl/sgl.o
00:02:40.175 CC test/nvme/reset/reset.o
00:02:40.175 CC test/nvme/aer/aer.o
00:02:40.175 CC test/accel/dif/dif.o
00:02:40.433 CC test/nvme/startup/startup.o
00:02:40.433 CC test/nvme/err_injection/err_injection.o
00:02:40.433 CC test/nvme/e2edp/nvme_dp.o
00:02:40.433 CC test/nvme/reserve/reserve.o
00:02:40.433 CC test/nvme/overhead/overhead.o
00:02:40.433 LINK scheduler
00:02:40.433 LINK thread
00:02:40.433 LINK hello_sock
00:02:40.433 CC test/blobfs/mkfs/mkfs.o
00:02:40.433 CC test/nvme/simple_copy/simple_copy.o
00:02:40.433 CC test/nvme/connect_stress/connect_stress.o
00:02:40.433 CC test/nvme/boot_partition/boot_partition.o
00:02:40.433 CC test/lvol/esnap/esnap.o
00:02:40.433 CC test/nvme/compliance/nvme_compliance.o
00:02:40.433 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:40.433 CC test/nvme/cuse/cuse.o
00:02:40.433 CC test/nvme/fdp/fdp.o
00:02:40.433 CC test/nvme/fused_ordering/fused_ordering.o
00:02:40.691 LINK vhost
00:02:40.691 LINK idxd_perf
00:02:40.691 LINK reserve
00:02:40.691 LINK boot_partition
00:02:40.691 LINK startup
00:02:40.691 LINK mkfs
00:02:40.691 LINK err_injection
00:02:40.691 LINK simple_copy
00:02:40.691 LINK doorbell_aers
00:02:40.691 LINK nvme_dp
00:02:40.691 LINK connect_stress
00:02:40.691 LINK sgl
00:02:40.691 LINK reset
00:02:40.691 CC examples/nvme/hotplug/hotplug.o
00:02:40.691 CC examples/nvme/reconnect/reconnect.o
00:02:40.691 CC examples/nvme/arbitration/arbitration.o
00:02:40.691 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:40.691 CC examples/nvme/hello_world/hello_world.o
00:02:40.691 CC examples/nvme/abort/abort.o
00:02:40.950 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:40.950 LINK aer
00:02:40.950 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:40.950 LINK fused_ordering
00:02:40.950 LINK overhead
00:02:40.950 LINK fdp
00:02:40.950 LINK nvme_compliance
00:02:40.950 CC examples/accel/perf/accel_perf.o
00:02:40.950 LINK dif
00:02:40.950 CC examples/blob/cli/blobcli.o
00:02:40.950 CC examples/blob/hello_world/hello_blob.o
00:02:40.950 LINK pmr_persistence
00:02:41.208 LINK memory_ut
00:02:41.208 LINK hello_world 00:02:41.208 LINK cmb_copy 00:02:41.208 LINK arbitration 00:02:41.208 LINK hotplug 00:02:41.208 LINK abort 00:02:41.208 LINK hello_blob 00:02:41.470 LINK nvme_manage 00:02:41.470 LINK reconnect 00:02:41.470 LINK accel_perf 00:02:41.470 CC test/bdev/bdevio/bdevio.o 00:02:41.836 LINK blobcli 00:02:41.837 CC examples/bdev/bdevperf/bdevperf.o 00:02:41.837 CC examples/bdev/hello_world/hello_bdev.o 00:02:42.095 LINK bdevio 00:02:42.095 LINK cuse 00:02:42.355 LINK iscsi_fuzz 00:02:42.355 LINK hello_bdev 00:02:43.735 LINK bdevperf 00:02:44.303 CC examples/nvmf/nvmf/nvmf.o 00:02:44.562 LINK nvmf 00:02:52.688 LINK esnap 00:02:52.688 00:02:52.688 real 1m13.200s 00:02:52.688 user 11m52.705s 00:02:52.688 sys 2m47.041s 00:02:52.688 19:56:55 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:52.688 19:56:55 make -- common/autotest_common.sh@10 -- $ set +x 00:02:52.688 ************************************ 00:02:52.688 END TEST make 00:02:52.688 ************************************ 00:02:52.688 19:56:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:52.688 19:56:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:52.688 19:56:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:52.688 19:56:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.688 19:56:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:52.688 19:56:55 -- pm/common@44 -- $ pid=1825459 00:02:52.688 19:56:55 -- pm/common@50 -- $ kill -TERM 1825459 00:02:52.688 19:56:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.688 19:56:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:52.688 19:56:55 -- pm/common@44 -- $ pid=1825461 00:02:52.688 19:56:55 -- pm/common@50 -- $ kill -TERM 1825461 00:02:52.688 19:56:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.688 19:56:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:52.688 19:56:55 -- pm/common@44 -- $ pid=1825463 00:02:52.688 19:56:55 -- pm/common@50 -- $ kill -TERM 1825463 00:02:52.688 19:56:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.688 19:56:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:52.688 19:56:55 -- pm/common@44 -- $ pid=1825499 00:02:52.688 19:56:55 -- pm/common@50 -- $ sudo -E kill -TERM 1825499 00:02:52.688 19:56:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:52.688 19:56:55 -- nvmf/common.sh@7 -- # uname -s 00:02:52.688 19:56:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.688 19:56:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.688 19:56:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.688 19:56:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.688 19:56:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:52.688 19:56:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.688 19:56:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.688 19:56:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.688 19:56:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.688 19:56:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:52.688 19:56:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:02:52.688 19:56:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:02:52.688 19:56:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:52.688 19:56:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:52.688 19:56:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:52.688 19:56:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:52.688 19:56:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:52.688 19:56:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:52.688 19:56:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.688 19:56:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.688 19:56:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.688 19:56:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.688 19:56:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.688 19:56:55 -- paths/export.sh@5 -- # export PATH 00:02:52.688 19:56:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.688 19:56:55 -- nvmf/common.sh@47 -- # : 0 00:02:52.688 19:56:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:52.688 19:56:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:52.688 19:56:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:52.688 19:56:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:52.689 19:56:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:52.689 19:56:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:52.689 19:56:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:52.689 19:56:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:52.689 19:56:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:52.689 19:56:55 -- spdk/autotest.sh@32 -- # uname -s 00:02:52.689 19:56:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:52.689 19:56:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:52.689 19:56:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:52.689 19:56:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:52.689 19:56:55 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:52.689 19:56:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:52.689 19:56:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:52.689 19:56:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:52.689 19:56:55 -- spdk/autotest.sh@48 -- # udevadm_pid=1884647 00:02:52.689 19:56:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:52.689 19:56:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:52.689 19:56:55 -- pm/common@17 -- # local monitor 00:02:52.689 19:56:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.689 19:56:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.689 19:56:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.689 19:56:55 -- pm/common@21 -- # date +%s 00:02:52.689 19:56:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.689 19:56:55 -- pm/common@21 -- # date +%s 00:02:52.689 19:56:55 -- pm/common@25 -- # sleep 1 00:02:52.689 19:56:55 -- pm/common@21 -- # date +%s 00:02:52.689 19:56:55 -- pm/common@21 -- # date +%s 00:02:52.689 19:56:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721843815 00:02:52.689 19:56:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721843815 00:02:52.689 19:56:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721843815 00:02:52.689 19:56:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721843815 00:02:52.689 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721843815_collect-vmstat.pm.log 00:02:52.689 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721843815_collect-cpu-load.pm.log 00:02:52.689 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721843815_collect-cpu-temp.pm.log 00:02:52.689 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721843815_collect-bmc-pm.bmc.pm.log 00:02:53.259 19:56:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.259 19:56:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:53.259 19:56:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:53.259 19:56:56 -- common/autotest_common.sh@10 -- # set +x 00:02:53.259 19:56:56 -- spdk/autotest.sh@59 -- # create_test_list 00:02:53.259 19:56:56 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:53.259 19:56:56 -- common/autotest_common.sh@10 -- # set +x 00:02:53.259 19:56:56 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:53.259 19:56:56 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.259 19:56:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
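The four collectors launched above (collect-cpu-load, collect-vmstat, collect-cpu-temp and, under sudo, collect-bmc-pm) share one monitor.autotest.sh.<epoch> suffix and leave one PID file apiece under the power output directory; those PID files are exactly what the kill -TERM loop at the end of the make step walks. A minimal sketch of that PID-file start/stop pattern, assuming an illustrative POWER_DIR and a reduced collector list rather than the exact pm/common code:

  #!/usr/bin/env bash
  # Sketch: start each collector in the background, record its PID in a file
  # named after the collector, and later TERM exactly what was started.
  POWER_DIR=${POWER_DIR:-./output/power}           # assumed output directory
  SUFFIX="monitor.autotest.sh.$(date +%s)"         # shared log-name suffix

  start_monitors() {
      mkdir -p "$POWER_DIR"
      local mon
      for mon in collect-cpu-load collect-vmstat; do    # illustrative subset
          "./scripts/perf/pm/$mon" -d "$POWER_DIR" -l -p "$SUFFIX" &
          echo $! > "$POWER_DIR/$mon.pid"               # one PID file each
      done
  }

  stop_monitors() {
      local pid_file
      for pid_file in "$POWER_DIR"/*.pid; do            # mirrors the TERM loop
          [[ -e $pid_file ]] && kill -TERM "$(cat "$pid_file")"
      done
  }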
00:02:53.259 19:56:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:53.259 19:56:56 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.259 19:56:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:53.259 19:56:56 -- common/autotest_common.sh@1455 -- # uname 00:02:53.259 19:56:56 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:53.259 19:56:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:53.259 19:56:56 -- common/autotest_common.sh@1475 -- # uname 00:02:53.259 19:56:56 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:53.259 19:56:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:53.259 19:56:57 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:53.259 19:56:57 -- spdk/autotest.sh@72 -- # hash lcov 00:02:53.259 19:56:57 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:53.259 19:56:57 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:53.259 --rc lcov_branch_coverage=1 00:02:53.259 --rc lcov_function_coverage=1 00:02:53.259 --rc genhtml_branch_coverage=1 00:02:53.259 --rc genhtml_function_coverage=1 00:02:53.259 --rc genhtml_legend=1 00:02:53.259 --rc geninfo_all_blocks=1 00:02:53.259 ' 00:02:53.259 19:56:57 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:53.259 --rc lcov_branch_coverage=1 00:02:53.259 --rc lcov_function_coverage=1 00:02:53.259 --rc genhtml_branch_coverage=1 00:02:53.259 --rc genhtml_function_coverage=1 00:02:53.259 --rc genhtml_legend=1 00:02:53.259 --rc geninfo_all_blocks=1 00:02:53.259 ' 00:02:53.259 19:56:57 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:53.259 --rc lcov_branch_coverage=1 00:02:53.259 --rc lcov_function_coverage=1 00:02:53.259 --rc genhtml_branch_coverage=1 00:02:53.259 --rc genhtml_function_coverage=1 00:02:53.259 --rc genhtml_legend=1 00:02:53.259 --rc geninfo_all_blocks=1 00:02:53.259 --no-external' 00:02:53.259 19:56:57 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:53.259 --rc lcov_branch_coverage=1 00:02:53.259 --rc lcov_function_coverage=1 00:02:53.259 --rc genhtml_branch_coverage=1 00:02:53.259 --rc genhtml_function_coverage=1 00:02:53.259 --rc genhtml_legend=1 00:02:53.259 --rc geninfo_all_blocks=1 00:02:53.259 --no-external' 00:02:53.259 19:56:57 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:53.517 lcov: LCOV version 1.14 00:02:53.517 19:56:57 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:56.054 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:56.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:56.055 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:56.055 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:56.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:56.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:14.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:14.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.879 19:57:52 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:52.879 19:57:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.879 19:57:52 -- common/autotest_common.sh@10 -- # set +x 00:03:52.879 19:57:52 -- spdk/autotest.sh@91 -- # rm -f 00:03:52.879 19:57:52 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.879 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:03:52.879 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:52.879 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:52.879 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:52.879 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:52.879 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:52.879 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:52.879 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:52.879 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:52.879 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:52.879 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:52.879 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:52.879 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:52.879 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:52.879 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:52.879 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:52.879 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:52.879 19:57:54 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:52.879 19:57:54 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:52.879 19:57:54 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:52.879 19:57:54 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:52.879 19:57:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.879 19:57:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:52.879 19:57:54 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:52.879 
19:57:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.879 19:57:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.879 19:57:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:52.879 19:57:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.879 19:57:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:52.879 19:57:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:52.879 19:57:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:52.879 19:57:54 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:52.879 No valid GPT data, bailing 00:03:52.879 19:57:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.879 19:57:54 -- scripts/common.sh@391 -- # pt= 00:03:52.879 19:57:54 -- scripts/common.sh@392 -- # return 1 00:03:52.879 19:57:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:52.879 1+0 records in 00:03:52.879 1+0 records out 00:03:52.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389762 s, 269 MB/s 00:03:52.879 19:57:54 -- spdk/autotest.sh@118 -- # sync 00:03:52.879 19:57:54 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.879 19:57:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.879 19:57:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:53.816 19:57:57 -- spdk/autotest.sh@124 -- # uname -s 00:03:53.816 19:57:57 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:53.816 19:57:57 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:53.816 19:57:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.816 19:57:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.816 19:57:57 -- common/autotest_common.sh@10 -- # set +x 00:03:54.075 ************************************ 00:03:54.075 START TEST setup.sh 00:03:54.075 ************************************ 00:03:54.075 19:57:57 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:54.075 * Looking for test storage... 00:03:54.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:54.075 19:57:57 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:54.075 19:57:57 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:54.075 19:57:57 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:54.075 19:57:57 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.075 19:57:57 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.075 19:57:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:54.075 ************************************ 00:03:54.075 START TEST acl 00:03:54.075 ************************************ 00:03:54.075 19:57:57 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:54.075 * Looking for test storage... 
00:03:54.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:54.075 19:57:57 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:54.075 19:57:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:54.075 19:57:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:54.075 19:57:57 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:54.075 19:57:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.075 19:57:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:54.075 19:57:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:54.076 19:57:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.076 19:57:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.076 19:57:57 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:54.076 19:57:57 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:54.076 19:57:57 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:54.076 19:57:57 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:54.076 19:57:57 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:54.076 19:57:57 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.076 19:57:57 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.606 19:58:00 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:56.606 19:58:00 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:56.606 19:58:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.606 19:58:00 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:56.606 19:58:00 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.606 19:58:00 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:57.982 Hugepages 00:03:57.982 node hugesize free / total 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 00:03:57.982 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.982 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.241 19:58:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:03:58.241 19:58:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.241 19:58:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:03:58.241 19:58:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.241 19:58:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.241 19:58:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.241 19:58:01 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:58.241 19:58:01 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.241 19:58:01 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.241 19:58:01 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.241 19:58:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.241 ************************************ 00:03:58.241 START TEST denied 00:03:58.241 ************************************ 00:03:58.241 19:58:01 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:58.241 19:58:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:03:58.241 19:58:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:58.241 19:58:01 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:03:58.241 19:58:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.241 19:58:01 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.144 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:04:00.144 19:58:03 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.144 19:58:03 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.427 00:04:03.427 real 0m4.598s 00:04:03.427 user 0m1.351s 00:04:03.427 sys 0m2.368s 00:04:03.427 19:58:06 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.427 19:58:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:03.427 ************************************ 00:04:03.427 END TEST denied 00:04:03.427 ************************************ 00:04:03.427 19:58:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:03.427 19:58:06 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.427 19:58:06 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.427 19:58:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:03.427 ************************************ 00:04:03.427 START TEST allowed 00:04:03.427 ************************************ 00:04:03.427 19:58:06 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:03.427 19:58:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:04:03.427 19:58:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:03.427 19:58:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:04:03.427 19:58:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.427 19:58:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.969 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.969 19:58:09 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:05.969 19:58:09 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:05.969 19:58:09 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:05.969 19:58:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.970 19:58:09 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.877 00:04:07.877 real 0m4.898s 00:04:07.877 user 0m1.399s 00:04:07.877 sys 0m2.395s 00:04:07.877 19:58:11 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.877 19:58:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:07.877 ************************************ 00:04:07.877 END TEST allowed 00:04:07.877 ************************************ 00:04:07.877 00:04:07.877 real 0m13.692s 00:04:07.877 user 0m4.399s 00:04:07.877 sys 0m7.422s 00:04:07.877 19:58:11 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.877 19:58:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:07.877 ************************************ 00:04:07.877 END TEST acl 00:04:07.877 ************************************ 00:04:07.878 19:58:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:07.878 19:58:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.878 19:58:11 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.878 19:58:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.878 ************************************ 00:04:07.878 START TEST hugepages 00:04:07.878 ************************************ 00:04:07.878 19:58:11 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:07.878 * Looking for test storage... 00:04:07.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 27256108 kB' 'MemAvailable: 30833780 kB' 'Buffers: 2704 kB' 'Cached: 10157532 kB' 'SwapCached: 0 kB' 'Active: 7155480 kB' 'Inactive: 3506120 kB' 'Active(anon): 6761808 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504660 kB' 'Mapped: 163828 kB' 'Shmem: 6260444 kB' 'KReclaimable: 179836 kB' 'Slab: 526928 kB' 'SReclaimable: 179836 kB' 'SUnreclaim: 347092 kB' 'KernelStack: 12480 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304780 kB' 'Committed_AS: 7867772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195504 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:07.878 19:58:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... xtrace elided: setup/common.sh get_meminfo walks every /proc/meminfo key (MemFree, MemAvailable, Buffers, Cached, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp), each non-matching key producing one "continue" at common.sh@32 ...]
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
[... xtrace elided: clear_hp iterates both NUMA nodes and echoes 0 into each /sys/devices/system/node/node$node/hugepages/hugepages-*/nr_hugepages entry (hugepages.sh@39-41, four "echo 0" writes in total) ...]
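The loop traced above is setup/common.sh's get_meminfo helper: it reads /proc/meminfo one "key: value" pair at a time (IFS=': ' with read -r var val _) and echoes the value once the requested key matches, which is why every non-matching key shows up as one "continue" at common.sh@32; here the query was Hugepagesize and the answer 2048 (kB). A minimal sketch of that helper and of clear_hp, reconstructed from the xtrace only (the real SPDK scripts carry more options, e.g. per-node meminfo lookups via mapfile):

#!/usr/bin/env bash
# get_meminfo KEY: print the value for KEY from /proc/meminfo.
# Reconstructed from the trace above (common.sh@17-33); simplified.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" in the trace
        echo "$val"                        # e.g. 2048 for Hugepagesize
        return 0
    done < /proc/meminfo
    return 1
}

# clear_hp: zero every per-node hugepage pool before the test sets its own
# counts (hugepages.sh@37-41 in the trace). Writing the sysfs files needs root.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}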
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:08.140 19:58:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:08.140 19:58:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:08.140 19:58:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:08.140 19:58:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:08.140 ************************************
00:04:08.140 START TEST default_setup
00:04:08.140 ************************************
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.140 19:58:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
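The arithmetic behind nr_hugepages=1024 in the trace: get_test_nr_hugepages is asked for 2097152 kB (2 GiB) of hugepage memory on node 0, and with the 2048 kB default page size found earlier that works out to 1024 pages, all pinned to nodes_test[0] because a single node id was passed. The same computation as a stand-alone snippet (variable names are illustrative, not from the SPDK scripts):

size_kb=2097152        # requested hugepage memory, from the trace
page_kb=2048           # default_hugepages, from get_meminfo Hugepagesize
echo $((size_kb / page_kb))   # prints 1024, matching nr_hugepages=1024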
00:04:09.520 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:09.520 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:09.520 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:09.520 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:09.520 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:09.520 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:09.520 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:09.520 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:09.520 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:09.777 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:09.777 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:09.777 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:09.777 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:09.777 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:09.777 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:09.777 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:10.716 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.716 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29334084 kB' 'MemAvailable: 32911740 kB' 'Buffers: 2704 kB' 'Cached: 10157628 kB' 'SwapCached: 0 kB' 'Active: 7174788 kB' 'Inactive: 3506120 kB' 'Active(anon): 6781116 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524132 kB' 'Mapped: 164180 kB' 'Shmem: 6260540 kB' 'KReclaimable: 179804 kB' 'Slab: 526448 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346644 kB' 'KernelStack: 12752 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7887080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
[... xtrace elided: get_meminfo scans the snapshot above key by key (MemTotal through HardwareCorrupted), hitting "continue" at common.sh@32 until AnonHugePages matches ...]
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
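verify_nr_hugepages (hugepages.sh@89 onward) first confirms that transparent hugepages are not forced to [never] and then samples counters through the same get_meminfo path: AnonHugePages (anon), HugePages_Surp (surp) and, next in the trace, HugePages_Rsvd (resv). A sketch of that sampling sequence, reusing the get_meminfo sketch above; the final assertion is an assumption about what the verifier checks, not verbatim SPDK code:

anon=$(get_meminfo AnonHugePages)    # 0 in this run
surp=$(get_meminfo HugePages_Surp)   # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)   # queried next in the trace
# Assumed sanity check: with no surplus or reserved pages in play, the free
# count should still equal the configured total until something maps the pages.
(( $(get_meminfo HugePages_Free) == $(get_meminfo HugePages_Total) ))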
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:10.717 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29337092 kB' 'MemAvailable: 32914748 kB' 'Buffers: 2704 kB' 'Cached: 10157632 kB' 'SwapCached: 0 kB' 'Active: 7173664 kB' 'Inactive: 3506120 kB' 'Active(anon): 6779992 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522972 kB' 'Mapped: 163856 kB' 'Shmem: 6260544 kB' 'KReclaimable: 179804 kB' 'Slab: 526416 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346612 kB' 'KernelStack: 12304 kB' 'PageTables: 7296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7887100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
[... xtrace elided: get_meminfo scans the snapshot above key by key (MemTotal through HugePages_Rsvd), hitting "continue" at common.sh@32 until HugePages_Surp matches ...]
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read
-r var val _ 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29337160 kB' 'MemAvailable: 32914816 kB' 'Buffers: 2704 kB' 'Cached: 10157648 kB' 'SwapCached: 0 kB' 'Active: 7173644 kB' 'Inactive: 3506120 kB' 'Active(anon): 6779972 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522640 kB' 'Mapped: 163856 kB' 'Shmem: 6260560 kB' 'KReclaimable: 179804 kB' 'Slab: 526480 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346676 kB' 'KernelStack: 12464 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7887120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.718 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.719 nr_hugepages=1024 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.719 resv_hugepages=0 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.719 surplus_hugepages=0 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.719 anon_hugepages=0 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.719 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29337160 kB' 'MemAvailable: 32914816 kB' 'Buffers: 2704 kB' 'Cached: 10157672 kB' 'SwapCached: 0 kB' 'Active: 7173668 kB' 'Inactive: 3506120 kB' 'Active(anon): 6779996 kB' 'Inactive(anon): 0 kB' 
'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522644 kB' 'Mapped: 163856 kB' 'Shmem: 6260584 kB' 'KReclaimable: 179804 kB' 'Slab: 526480 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346676 kB' 'KernelStack: 12464 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7887144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.720 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
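
An aside on notation (not part of the captured console output): the backslash-riddled patterns above, e.g. \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, are not corruption. They are bash xtrace re-quoting the right-hand side of == inside [[ ]]: because the pattern is quoted in the script it is matched literally rather than as a glob, and set -x escapes every character to make that explicit. A minimal standalone reproduction, assuming plain bash and not taken from the SPDK tree:

set -x
get=HugePages_Total              # key the caller asked for
var=MemTotal                     # key parsed from the current meminfo line
[[ $var == "$get" ]] || echo "skip $var"
# xtrace prints: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
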
00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.981 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
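
The scan that just completed (echo 1024, return 0) is another call into setup/common.sh's get_meminfo helper, visible throughout the trace: it loads /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node is given, strips the per-node "Node N " prefix, then walks the lines with IFS=': ' read -r var val _ until the requested key matches and prints its value. Below is a condensed sketch of that logic reconstructed from the trace alone; the function name, file paths, and the prefix-strip expression come from the log, everything else is an assumption rather than the verbatim SPDK source:

shopt -s extglob                       # needed for the +([0-9]) patterns below

get_meminfo() {                        # usage: get_meminfo <key> [numa-node]
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # Per-node counters live in sysfs; with an empty $node the test probes
    # .../node/node/meminfo (as seen in the trace) and falls through to the
    # system-wide /proc/meminfo.
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it so both layouts
    # parse the same way ("HugePages_Total: 1024" etc.).
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # common.sh@32 in the trace
        echo "$val"                        # common.sh@33
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total    # prints 1024 on this runner, matching the trace

The same helper, called with a node argument further down, is what produces the /sys/devices/system/node/node0/meminfo read in the node-0 scan below.
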
00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12551428 kB' 'MemUsed: 12067984 kB' 'SwapCached: 0 kB' 'Active: 5764064 kB' 'Inactive: 3329772 kB' 'Active(anon): 5505432 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8776360 kB' 'Mapped: 80956 kB' 'AnonPages: 320556 kB' 'Shmem: 5187956 kB' 'KernelStack: 7768 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116720 kB' 'Slab: 282932 kB' 'SReclaimable: 116720 kB' 'SUnreclaim: 166212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.982 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[log condensed: 00:04:10.982-00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- the xtrace repeats the same three records ("IFS=': '", "read -r var val _", "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue") for every node0 meminfo key from MemFree through HugePages_Free while scanning for HugePages_Surp]
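The span condensed above is the inner loop of the get_meminfo helper in setup/common.sh: each meminfo line is split on ': ' into key and value, every key other than the requested one is skipped with continue, and the value of the matching key is echoed. A minimal sketch of that pattern in plain bash follows; the node handling is simplified from the real helper, which strips the "Node <N>" prefix with a mapfile/extglob step rather than sed:

#!/usr/bin/env bash
# Sketch of the scan traced above: find one field in (a node's) meminfo.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node query: read that node's own meminfo view instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" records
        echo "${val:-0}"
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
}

get_meminfo HugePages_Surp 0    # prints 0 on node0, matching the trace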
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:10.983 node0=1024 expecting 1024
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:10.983
00:04:10.983 real	0m2.798s
00:04:10.983 user	0m0.833s
00:04:10.983 sys	0m1.115s
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:10.983 19:58:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:10.983 ************************************
00:04:10.983 END TEST default_setup
00:04:10.983 ************************************
00:04:10.983 19:58:14 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:10.983 19:58:14 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:10.983 19:58:14 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:10.983 19:58:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:10.983 ************************************
00:04:10.983 START TEST per_node_1G_alloc
00:04:10.983 ************************************
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # ((
size >= default_hugepages )) 00:04:10.983 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.984 19:58:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.361 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:12.361 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.361 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:12.361 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:12.361 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:12.361 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:12.361 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:12.361 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:12.361 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:12.361 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:12.361 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:12.361 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:12.361 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:12.361 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:12.361 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:12.361 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:12.361 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:12.623 19:58:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.623 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.624 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.624 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.624 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.624 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.624 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.624 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29295492 kB' 'MemAvailable: 32873148 kB' 'Buffers: 2704 kB' 'Cached: 10157748 kB' 'SwapCached: 0 kB' 'Active: 7174188 kB' 'Inactive: 3506120 kB' 'Active(anon): 6780516 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523092 kB' 'Mapped: 164064 kB' 'Shmem: 6260660 kB' 'KReclaimable: 179804 kB' 'Slab: 526668 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346864 kB' 'KernelStack: 12512 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7887332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:12.624 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[log condensed: 00:04:12.624-00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- the xtrace repeats "IFS=': ' / read -r var val _ / continue" for every /proc/meminfo key from MemFree through HardwareCorrupted while scanning for AnonHugePages]
00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
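At this point the verify step has anon=0 and goes on to pull the surplus, reserved, total, and free counters through the same helper. A short sketch of that bookkeeping, reusing the get_meminfo sketch above; the variable names follow the hugepages.sh trace, while the final sanity check is an assumed illustration, not a line from the script:

# Counters the verify step collects (values as logged on this box):
anon=$(get_meminfo AnonHugePages)     # 0
surp=$(get_meminfo HugePages_Surp)    # 0, queried next in the trace
resv=$(get_meminfo HugePages_Rsvd)    # 0
total=$(get_meminfo HugePages_Total)  # 1024 in the meminfo dumps
free=$(get_meminfo HugePages_Free)    # 1024 in the meminfo dumps
# Assumed sanity check: all configured pages idle before per-node checks.
(( anon == 0 && surp == 0 && resv == 0 && free == total )) \
    && echo "all $total hugepages free"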
00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29301068 kB' 'MemAvailable: 32878724 kB' 'Buffers: 2704 kB' 'Cached: 10157752 kB' 'SwapCached: 0 kB' 'Active: 7173872 kB' 'Inactive: 3506120 kB' 'Active(anon): 6780200 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522772 kB' 'Mapped: 163868 kB' 'Shmem: 6260664 kB' 'KReclaimable: 179804 kB' 'Slab: 526656 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346852 kB' 'KernelStack: 12448 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7887352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.625 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.625 19:58:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[log condensed: 00:04:12.625-00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- the xtrace repeats "IFS=': ' / read -r var val _ / continue" for every /proc/meminfo key from Buffers through HugePages_Rsvd while scanning for HugePages_Surp]
00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc
00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.627 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.891 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29301704 kB' 'MemAvailable: 32879360 kB' 'Buffers: 2704 kB' 'Cached: 10157768 kB' 'SwapCached: 0 kB' 'Active: 7173688 kB' 'Inactive: 3506120 kB' 'Active(anon): 6780016 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522604 kB' 'Mapped: 163868 kB' 'Shmem: 6260680 kB' 'KReclaimable: 179804 kB' 'Slab: 526656 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346852 kB' 'KernelStack: 12448 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7887372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
00:04:12.891 [... repetitive scan trace elided: setup/common.sh@31-32 "read -r var val _" / "continue" repeated for every field above, from MemTotal through HugePages_Free, until HugePages_Rsvd matched ...]
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:12.893 nr_hugepages=1024
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.893 resv_hugepages=0
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:12.893 surplus_hugepages=0
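The three values just echoed summarize the pool state the test expects before the hugepages.sh@107 identity check that follows (HugePages_Total must equal nr_hugepages + surp + resv): 1024 pages requested, none reserved, none surplus. As a hedged cross-check against the meminfo dump above, assuming a single hugepage size is in use (true on this box, 2048 kB pages), the 'Hugetlb' line should be HugePages_Total times Hugepagesize, and 1024 x 2048 kB = 2097152 kB matches it exactly:

    # Sanity arithmetic, not part of the test scripts; standard /proc/meminfo
    # fields only. Holds when one hugepage size backs the whole pool.
    pages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    psize=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # in kB
    hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)      # in kB
    (( pages * psize == hugetlb )) \
        && echo "consistent: ${pages} x ${psize} kB = ${hugetlb} kB"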
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:12.893 anon_hugepages=0
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.893 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29302592 kB' 'MemAvailable: 32880248 kB' 'Buffers: 2704 kB' 'Cached: 10157792 kB' 'SwapCached: 0 kB' 'Active: 7173884 kB' 'Inactive: 3506120 kB' 'Active(anon): 6780212 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522828 kB' 'Mapped: 163868 kB' 'Shmem: 6260704 kB' 'KReclaimable: 179804 kB' 'Slab: 526656 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346852 kB' 'KernelStack: 12464 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7887396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
00:04:12.893 [... repetitive scan trace elided: setup/common.sh@31-32 "read -r var val _" / "continue" repeated for every field above until HugePages_Total matched ...]
00:04:12.894 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.895 [... get_nodes loop trace condensed: setup/hugepages.sh@29-30 iterated twice, setting nodes_sys[0]=512 and nodes_sys[1]=512 ...]
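Before checking each node, get_nodes counts the NUMA nodes exposed under /sys/devices/system/node and records the per-node share the test expects: with a 1024-page pool over two nodes, 512 apiece. A sketch of that enumeration, assuming the standard sysfs layout with one nodeN directory per NUMA node (nodes_sketch is an illustrative name; extglob is required for the +([0-9]) glob the trace shows):

    # Hedged sketch of the get_nodes step just traced.
    shopt -s extglob nullglob
    declare -a nodes_sketch
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sketch[${node##*node}]=512    # expected pages on this node
    done
    echo "no_nodes=${#nodes_sketch[@]}"     # prints no_nodes=2 on this rig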
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.895 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13570884 kB' 'MemUsed: 11048528 kB' 'SwapCached: 0 kB' 'Active: 5764876 kB' 'Inactive: 3329772 kB' 'Active(anon): 5506244 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8776444 kB' 'Mapped: 80968 kB' 'AnonPages: 321344 kB' 'Shmem: 5188040 kB' 'KernelStack: 7816 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116720 kB' 'Slab: 283060 kB' 'SReclaimable: 116720 kB' 'SUnreclaim: 166340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:12.895 [... repetitive scan trace elided: setup/common.sh@31-32 "read -r var val _" / "continue" repeated over the node0 fields above; the log is truncated here, partway through the HugePages_Surp scan ...]
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.896 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15731708 kB' 'MemUsed: 3675536 kB' 'SwapCached: 0 kB' 'Active: 1409084 kB' 'Inactive: 176348 kB' 'Active(anon): 1274044 kB' 'Inactive(anon): 0 kB' 'Active(file): 135040 kB' 'Inactive(file): 176348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1384076 kB' 'Mapped: 82900 kB' 'AnonPages: 201476 kB' 'Shmem: 1072688 kB' 'KernelStack: 4648 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 243596 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 180512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
(xtrace condensed: setup/common.sh@32 tests every node1 meminfo field, MemTotal through HugePages_Free, against HugePages_Surp and continues past each mismatch)
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
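The scan traced above is just a keyed lookup over a meminfo file. A minimal standalone sketch of the same idea, written from the visible trace rather than copied from setup/common.sh (the helper name and the sed-based prefix strip are this sketch's own):

#!/usr/bin/env bash
# Look up one "key: value" entry in /proc/meminfo or a per-node meminfo file.
# Per-node files prefix each line with "Node <n> ", which is stripped first.
get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        # Same test the trace shows at common.sh@32: skip until the key matches.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}
get_meminfo_sketch HugePages_Surp 1   # prints 0 on the node1 snapshot above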
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:12.898 node0=512 expecting 512
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:12.898 node1=512 expecting 512
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:12.898
00:04:12.898 real 0m2.011s
00:04:12.898 user 0m0.879s
00:04:12.898 sys 0m1.106s
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:12.898 19:58:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:12.898 ************************************
00:04:12.898 END TEST per_node_1G_alloc
00:04:12.898 ************************************
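For reference, the pass condition just printed reduces to a small piece of per-node arithmetic. A sketch of that bookkeeping, with an awk lookup standing in for the script's own get_meminfo helper and reserved pages taken as 0, as they are in this run:

#!/usr/bin/env bash
# Each node's requested share (1024 pages split over 2 nodes) plus any
# surplus pages the kernel reports for that node should equal 512.
nodes_test=(512 512)
for node in 0 1; do
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
        "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting 512"
done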
00:04:13.158 19:58:16 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:13.158 19:58:16 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:13.158 19:58:16 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:13.158 19:58:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:13.158 ************************************
00:04:13.158 START TEST even_2G_alloc
00:04:13.158 ************************************
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
(xtrace condensed: setup/hugepages.sh@81-84 loops over the 2 nodes and assigns nodes_test[1]=512 and nodes_test[0]=512, i.e. 512 pages per node)
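The sizing traced above is plain division. A sketch of the same math, treating the requested size as kB (consistent with nr_hugepages=1024 in the trace) and reading the hugepage size from /proc/meminfo rather than from the script's own variables:

#!/usr/bin/env bash
# 2097152 kB requested / 2048 kB per hugepage = 1024 pages; 2 nodes -> 512 each.
size_kb=2097152
hp_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 here
nr_hugepages=$(( size_kb / hp_kb ))                             # 1024
echo "per node: $(( nr_hugepages / 2 ))"                        # 512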
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:13.158 19:58:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:14.535 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:14.535 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:14.535 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:14.535 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:14.795 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:14.795 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:14.795 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:14.795 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:14.795 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:14.795 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:14.795 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:14.795 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:14.795 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:14.795 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:14.795 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:14.795 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:14.795 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
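setup.sh's internals are not shown in this log, but with HUGE_EVEN_ALLOC=yes its observable effect is an even per-node reservation. One plausible way to express that against the kernel's stock sysfs interface (paths are the kernel's, not taken from setup.sh):

#!/usr/bin/env bash
# Spread NRHUGE 2 MB hugepages evenly across all NUMA nodes by writing each
# node's share to its per-node nr_hugepages file.
NRHUGE=${NRHUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( NRHUGE / ${#nodes[@]} ))
for n in "${nodes[@]}"; do
    echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done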
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.795 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.796 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29311856 kB' 'MemAvailable: 32889512 kB' 'Buffers: 2704 kB' 'Cached: 10157888 kB' 'SwapCached: 0 kB' 'Active: 7173024 kB' 'Inactive: 3506120 kB' 'Active(anon): 6779352 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521840 kB' 'Mapped: 163396 kB' 'Shmem: 6260800 kB' 'KReclaimable: 179804 kB' 'Slab: 526448 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346644 kB' 'KernelStack: 12336 kB' 'PageTables: 7396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7877896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
(xtrace condensed: setup/common.sh@32 tests every /proc/meminfo field, MemTotal through HardwareCorrupted, against AnonHugePages and continues past each mismatch)
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
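The hugepages.sh@96 test above is why anon pages are counted at all: the policy string read from sysfs is "always [madvise] never", i.e. transparent hugepages are not forced off. A sketch of that gate, as an illustration rather than a copy of hugepages.sh:

#!/usr/bin/env bash
# Only when transparent hugepages are not "[never]" can anonymous memory be
# backed by hugepages, so only then is AnonHugePages worth reading.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # kB; 0 in this run
fi
echo "anon=$anon"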
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.797 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29306072 kB' 'MemAvailable: 32883728 kB' 'Buffers: 2704 kB' 'Cached: 10157892 kB' 'SwapCached: 0 kB' 'Active: 7175204 kB' 'Inactive: 3506120 kB' 'Active(anon): 6781532 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524072 kB' 'Mapped: 163280 kB' 'Shmem: 6260804 kB' 'KReclaimable: 179804 kB' 'Slab: 526448 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346644 kB' 'KernelStack: 12400 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7880160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
(xtrace condensed: setup/common.sh@32 tests each /proc/meminfo field, MemTotal through PageTables so far, against HugePages_Surp; the scan continues below)
00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
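[editor's note] For anyone decoding this trace: the long runs of "continue" above are setup/common.sh's get_meminfo walking every /proc/meminfo key under `set -x` until it reaches the one it was asked for (HugePages_Surp here). Below is a minimal sketch of that scan, reconstructed only from the commands visible in this log; get_meminfo_sketch is an illustrative name and the real script's option handling may differ.

#!/usr/bin/env bash
shopt -s extglob
# Minimal sketch of the meminfo scan traced above; reconstructed from the
# xtrace output, not a verbatim copy of setup/common.sh.
get_meminfo_sketch() {
    local get=$1 node=${2:-}    # key to look up, optional NUMA node
    local var val _
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo when it exists, matching
    # the [[ -e /sys/devices/system/node/node$node/meminfo ]] test in the trace;
    # with an empty node the test fails and /proc/meminfo is used.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node N "; strip it, as the trace does
    # with mem=("${mem[@]#Node +([0-9]) }").
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # source of the endless "continue" lines
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Callers capture the value by command substitution, e.g. surp=$(get_meminfo_sketch HugePages_Surp), which is why each scan in this log ends with "echo 0" and "return 0" immediately followed by an assignment such as surp=0 in hugepages.sh.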
00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.063 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29311356 kB' 'MemAvailable: 32889012 kB' 'Buffers: 2704 kB' 'Cached: 10157908 kB' 'SwapCached: 0 kB' 'Active: 7172772 kB' 'Inactive: 3506120 kB' 'Active(anon): 6779100 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521500 kB' 'Mapped: 163280 kB' 'Shmem: 6260820 kB' 'KReclaimable: 179804 kB' 'Slab: 526448 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346644 kB' 'KernelStack: 12384 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7877672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
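[editor's note] The backslash runs such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d above are not corruption: the right-hand side of == in [[ ]] is quoted in the script (a literal match, not a glob), and bash's xtrace renders such a literal pattern with every character escaped. A tiny illustrative demo, assuming nothing beyond stock bash:

#!/usr/bin/env bash
set -x
key=MemTotal
want=HugePages_Rsvd
# Because "$want" is quoted, [[ ]] matches it literally, and xtrace prints
# the pattern character-escaped, e.g.:
#   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[[ $key == "$want" ]] || echo "no match, keep scanning"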
00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.064 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 
19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.065 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.066 nr_hugepages=1024 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.066 resv_hugepages=0 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.066 surplus_hugepages=0 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.066 anon_hugepages=0 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29311612 
kB' 'MemAvailable: 32889268 kB' 'Buffers: 2704 kB' 'Cached: 10157912 kB' 'SwapCached: 0 kB' 'Active: 7176420 kB' 'Inactive: 3506120 kB' 'Active(anon): 6782748 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525252 kB' 'Mapped: 163684 kB' 'Shmem: 6260824 kB' 'KReclaimable: 179804 kB' 'Slab: 526448 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346644 kB' 'KernelStack: 12416 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7881272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195620 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.066 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.067 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: the read loop continues past every remaining /proc/meminfo key (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted), none of which matches the requested key ...]
00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
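What this wall of xtrace is doing is a plain key lookup: setup/common.sh walks a meminfo file with IFS=': ' and read -r var val _, hitting continue on every key that is not the one requested, then echoes the value and returns. A minimal standalone sketch of that pattern, simplified from the trace (get_meminfo_value is an illustrative name, not the helper's real one):

get_meminfo_value() {                        # illustrative name
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do     # 'HugePages_Total:   1024' -> var/val
        [[ $var == "$get" ]] || continue     # the long run of continues above
        echo "$val"                          # e.g. 1024
        return 0
    done < "$mem_f"
    return 1                                 # key absent
}
get_meminfo_value HugePages_Total            # -> 1024 on this box

The value 1024 echoed on the next line of the trace is exactly this lookup completing for HugePages_Total.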
00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13585832 kB' 'MemUsed: 11033580 kB' 'SwapCached: 0 kB' 'Active: 5762916 kB' 'Inactive: 3329772 kB' 'Active(anon): 5504284 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8776508 kB' 'Mapped: 80248 kB' 'AnonPages: 319352 kB' 'Shmem: 5188104 kB' 'KernelStack: 7752 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116720 kB' 'Slab: 282912 kB' 'SReclaimable: 116720 kB' 'SUnreclaim: 166192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.068 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: the node0 snapshot is scanned key by key (MemFree through HugePages_Free), every non-matching key taking the same continue ...]
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
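The per-node variant visible here only swaps the input file: when /sys/devices/system/node/node1/meminfo exists it replaces /proc/meminfo, and because the kernel prefixes each line of that file with "Node 1 ", the trace's mem=("${mem[@]#Node +([0-9]) }") strips the prefix before the same scan runs. A sketch under those assumptions (node_meminfo_value is an illustrative name; the +([0-9]) pattern needs extglob):

shopt -s extglob                                      # needed for +([0-9])
node_meminfo_value() {                                # illustrative name
    local get=$1 node=$2 var val _ mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"                         # slurp the snapshot, as the trace does
    mem=("${mem[@]#Node +([0-9]) }")                  # drop the 'Node N ' prefix
    printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; break; }
    done
}
node_meminfo_value HugePages_Surp 1                   # -> 0 in the run above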
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15730952 kB' 'MemUsed: 3676292 kB' 'SwapCached: 0 kB' 'Active: 1408292 kB' 'Inactive: 176348 kB' 'Active(anon): 1273252 kB' 'Inactive(anon): 0 kB' 'Active(file): 135040 kB' 'Inactive(file): 176348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1384168 kB' 'Mapped: 82596 kB' 'AnonPages: 200596 kB' 'Shmem: 1072780 kB' 'KernelStack: 4664 kB' 'PageTables: 3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 243536 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 180452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.069 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: the node1 snapshot is scanned the same way, every key from MemFree through HugePages_Free skipped with continue ...]
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:15.071 node0=512 expecting 512
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:15.071 node1=512 expecting 512
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:15.071
00:04:15.071 real 0m2.117s
00:04:15.071 user 0m0.875s
00:04:15.071 sys 0m1.229s
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:15.071 19:58:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:15.071 ************************************
00:04:15.071 END TEST even_2G_alloc
00:04:15.071 ************************************
00:04:15.330 19:58:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:15.330 19:58:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:15.330 19:58:18 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:15.330 19:58:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:15.330 ************************************
00:04:15.330 START TEST odd_alloc
00:04:15.330 ************************************
00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
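odd_alloc asks for size=2098176 kB, i.e. 1025 two-megabyte pages (HUGEMEM=2049 in the trace below), and the hugepages.sh lines that follow fill nodes_test from the last node down: 1025/2 = 512 lands on node1 and the 513-page remainder on node0, as the pair of nodes_test[_no_nodes - 1] assignments suggests. A sketch of that split, reconstructed from the trace (split_hugepages_per_node is an illustrative name):

split_hugepages_per_node() {                      # illustrative name
    local nr=$1 nodes=$2
    local -a nodes_test
    while (( nodes > 0 )); do
        nodes_test[nodes - 1]=$(( nr / nodes ))   # 1025/2 -> 512 on the last node
        : $(( nr -= nodes_test[nodes - 1] ))      # 513 left for the remaining node(s)
        : $(( nodes -= 1 ))
    done
    local i
    for i in "${!nodes_test[@]}"; do
        echo "node$i=${nodes_test[i]}"
    done
}
split_hugepages_per_node 1025 2                   # node0=513, node1=512
split_hugepages_per_node 1024 2                   # node0=512, node1=512 (the test that just passed)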
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.330 19:58:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.708 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:16.708 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:16.708 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:16.708 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:16.708 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:16.708 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:16.708 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:16.708 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:16.708 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:16.708 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:16.708 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:16.708 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:16.708 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:16.708 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:16.708 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:16.708 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:16.708 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29347892 kB' 'MemAvailable: 32925548 kB' 'Buffers: 2704 kB' 'Cached: 10158024 kB' 'SwapCached: 0 kB' 'Active: 7172008 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778336 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520624 kB' 'Mapped: 162872 kB' 'Shmem: 6260936 kB' 'KReclaimable: 179804 kB' 'Slab: 526520 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346716 kB' 'KernelStack: 12448 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7875512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1717852 kB' 
'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:16.970 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: the full /proc/meminfo snapshot above is scanned key by key (MemFree through VmallocTotal), each one skipped with continue because it is not AnonHugePages; the trace resumes below with VmallocUsed ...]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29347892 kB' 'MemAvailable: 32925548 kB' 'Buffers: 2704 kB' 'Cached: 10158028 kB' 'SwapCached: 0 kB' 'Active: 7171688 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778016 kB' 'Inactive(anon): 0 
kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520292 kB' 'Mapped: 162868 kB' 'Shmem: 6260940 kB' 'KReclaimable: 179804 kB' 'Slab: 526520 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346716 kB' 'KernelStack: 12432 kB' 'PageTables: 7504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7875532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 
19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.971 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 
19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29348248 kB' 'MemAvailable: 32925904 kB' 'Buffers: 2704 kB' 'Cached: 10158044 kB' 'SwapCached: 0 kB' 'Active: 7171664 kB' 'Inactive: 3506120 kB' 'Active(anon): 6777992 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520308 kB' 'Mapped: 162868 kB' 'Shmem: 6260956 kB' 'KReclaimable: 179804 kB' 'Slab: 526532 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346728 kB' 'KernelStack: 12448 kB' 'PageTables: 7572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7875552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.972 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 
19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:16.973 nr_hugepages=1025 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.973 resv_hugepages=0 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.973 surplus_hugepages=0 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.973 anon_hugepages=0 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.973 19:58:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.973 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29348248 kB' 'MemAvailable: 32925904 kB' 'Buffers: 2704 kB' 'Cached: 10158084 kB' 'SwapCached: 0 kB' 'Active: 7171296 kB' 'Inactive: 3506120 kB' 'Active(anon): 6777624 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519868 kB' 'Mapped: 162868 kB' 'Shmem: 6260996 kB' 'KReclaimable: 179804 kB' 'Slab: 526532 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346728 kB' 'KernelStack: 12416 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7875572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
[xtrace condensed: setup/common.sh@32 tests each snapshot field above against HugePages_Total and skips every non-matching field with 'continue']
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
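Nearly all of the bulk in this stretch is one helper, get_meminfo in setup/common.sh, scanning a meminfo snapshot field by field under xtrace. Reconstructed from the traced statements (names and structure follow the trace; the actual helper may differ in detail), the whole loop is roughly:

shopt -s extglob
get_meminfo() {
  local get=$1 node=$2
  local var val
  local mem_f mem
  mem_f=/proc/meminfo
  # with a node argument, prefer the per-node snapshot when present
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # per-node files prefix every line with "Node <N> "; strip that prefix
  mem=("${mem[@]#Node +([0-9]) }")
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # this is the repeated @32 test in the trace
    echo "$val"
    return 0
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}

Called as get_meminfo HugePages_Total against /proc/meminfo it prints the 1025 seen above; with a node argument, as in get_meminfo HugePages_Surp 0 below, it reads the per-node snapshot instead.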
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.235 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13622244 kB' 'MemUsed: 10997168 kB' 'SwapCached: 0 kB' 'Active: 5763104 kB' 'Inactive: 3329772 kB' 'Active(anon): 5504472 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8776520 kB' 'Mapped: 80272 kB' 'AnonPages: 319516 kB' 'Shmem: 5188116 kB' 'KernelStack: 7768 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116720 kB' 'Slab: 282944 kB' 'SReclaimable: 116720 kB' 'SUnreclaim: 166224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 tests each node0 field above against HugePages_Surp and skips every non-matching field with 'continue']
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
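For reference, the verification logic whose xtrace is interleaved through this region, setup/hugepages.sh@110 through @117, reduces to a few lines. This is a paraphrase assembled from the traced statements, assuming nr_hugepages, surp, resv and the nodes_test array are in scope as the trace shows, and reusing the get_meminfo sketch above; the verbatim source may differ:

# the global pool must equal the requested pages plus surplus and reserved
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
# fold reserved and per-node surplus pages into each node's expected count
for node in "${!nodes_test[@]}"; do
  (( nodes_test[node] += resv ))
  (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done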
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:17.236 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15729088 kB' 'MemUsed: 3678156 kB' 'SwapCached: 0 kB' 'Active: 1408844 kB' 'Inactive: 176348 kB' 'Active(anon): 1273804 kB' 'Inactive(anon): 0 kB' 'Active(file): 135040 kB' 'Inactive(file): 176348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1384268 kB' 'Mapped: 82596 kB' 'AnonPages: 201044 kB' 'Shmem: 1072880 kB' 'KernelStack: 4712 kB' 'PageTables: 3392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 243588 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 180504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 tests each node1 field above against HugePages_Surp and skips every non-matching field with 'continue']
00:04:17.237 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.237 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.237 19:58:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
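The sorted_t/sorted_s assignments above implement an order-insensitive comparison: each per-node count is used as the index of a plain indexed array, and bash lists indexed-array keys in ascending order, so two pools holding the same set of counts produce identical key lists no matter which NUMA node received the odd page. A sketch of the idiom, reconstructed from the trace (the real script's declarations may differ):

declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
  sorted_t[nodes_test[node]]=1   # key = observed per-node count
  sorted_s[nodes_sys[node]]=1    # key = expected per-node count
done
# "${!arr[*]}" expands the keys in ascending numeric order, which is
# what the log shows below as [[ 512 513 == \5\1\2\ \5\1\3 ]]
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]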
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:17.238 node0=512 expecting 513
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:17.238 node1=513 expecting 512
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:17.238
00:04:17.238 real	0m1.950s
00:04:17.238 user	0m0.755s
00:04:17.238 sys	0m1.173s
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:17.238 19:58:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:17.238 ************************************
00:04:17.238 END TEST odd_alloc
00:04:17.238 ************************************
00:04:17.238 19:58:20 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:17.238 19:58:20 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:17.238 19:58:20 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:17.238 19:58:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:17.238 ************************************
00:04:17.238 START TEST custom_alloc
00:04:17.238 ************************************
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
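get_test_nr_hugepages converts a requested pool size in kB into a page count using the default hugepage size, which the snapshots above report as Hugepagesize: 2048 kB. A hedged sketch of that conversion (the helper's real argument handling and clamping may differ):

# default_hugepages: the system hugepage size in kB (2048 on this box)
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
size=1048576    # first request in the trace, in kB
(( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"    # 1048576 / 2048 = 512; the later 2097152 request gives 1024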
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
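The @81..@84 lines above are an even split of _nr_hugepages across _no_nodes, walking node indices from the top down; the bare ': 256'/': 1' and ': 0'/': 0' entries are consistent with `:` being used to evaluate arithmetic side effects under xtrace. A reconstruction that reproduces exactly the traced values for 512 pages over 2 nodes (a sketch; the real loop in setup/hugepages.sh may differ):

_nr_hugepages=512 _no_nodes=2
declare -a nodes_test=()
while (( _no_nodes > 0 )); do
  nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
  : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traces as ": 256", then ": 0"
  : $(( --_no_nodes ))                                  # traces as ": 1", then ": 0"
done
echo "${nodes_test[@]}"   # 256 256; the earlier odd_alloc request of 1025 yields 513 512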
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:17.238 19:58:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:19.153 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:19.153 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:19.153 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:19.153 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:19.153 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:19.153 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:19.153 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:19.153 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:19.153 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:19.153 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:19.153 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:19.153 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:19.153 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:19.153 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:19.153 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:19.153 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:19.153 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
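HUGENODE is handed to scripts/setup.sh as 'nodes_hp[0]=512,nodes_hp[1]=1024', that is, 512 pages on node 0 and 1024 on node 1 (1536 total, matching the nr_hugepages=1536 below). How setup.sh applies it internally is not visible in this log; the kernel knob such a per-node request ultimately maps onto is the per-node sysfs counter, so an equivalent manual allocation would look like this (illustrative only, requires root):

# per-node hugepage pools for 2048 kB pages
echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages   # 512, then 1024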
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28286584 kB' 'MemAvailable: 31864240 kB' 'Buffers: 2704 kB' 'Cached: 10158152 kB' 'SwapCached: 0 kB' 'Active: 7172592 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778920 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521020 kB' 'Mapped: 162972 kB' 'Shmem: 6261064 kB' 'KReclaimable: 179804 kB' 'Slab: 526556 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346752 kB' 'KernelStack: 12816 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7876764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
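The xtrace above shows verify_nr_hugepages walking /proc/meminfo field by field: common.sh mapfiles the whole file, then re-reads each entry with IFS=': ' until the requested key (here AnonHugePages) matches, at which point its value is echoed. A minimal sketch of that lookup pattern, for the global /proc/meminfo case only — an illustrative re-implementation, not the actual SPDK helper; the real one in test/setup/common.sh also strips the "Node <id> " prefix when reading a per-node meminfo, as the mem=("${mem[@]#Node +([0-9]) }") step in the trace shows:

    # Illustrative sketch of the lookup traced above (not the SPDK source).
    # Each meminfo line is split on ': ' into key, value, and unit.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Print the number once the requested field is reached.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done </proc/meminfo
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # -> 0 in this run, hence anon=0 below

Every non-matching key produces one [[ ... ]]/continue pair in the trace, which is why these scans dominate the log.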
00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.153 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28286672 kB' 'MemAvailable: 31864328 kB' 'Buffers: 2704 kB' 'Cached: 10158156 kB' 'SwapCached: 0 kB' 'Active: 7174256 kB' 'Inactive: 3506120 kB' 'Active(anon): 6780584 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522692 kB' 'Mapped: 162972 kB' 'Shmem: 6261068 kB' 'KReclaimable: 179804 kB' 'Slab: 526540 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346736 kB' 'KernelStack: 13232 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7878148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:19.154 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.155 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
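The repeated meminfo dumps are self-consistent with the custom allocation set up earlier (nodes_hp[0]=512, nodes_hp[1]=1024, nr_hugepages=1536 at hugepages.sh@188). A quick check of the arithmetic, using only numbers visible in the printf lines above (illustrative, not part of the test):

    # 512 + 1024 pages across the two NUMA nodes = HugePages_Total reported.
    nodes_hp=(512 1024)
    total=$(( nodes_hp[0] + nodes_hp[1] ))   # 1536 = HugePages_Total
    # 1536 pages * 2048 kB Hugepagesize = Hugetlb shown in meminfo.
    hugetlb_kb=$(( total * 2048 ))           # 3145728 kB
    echo "HugePages_Total=$total Hugetlb=${hugetlb_kb} kB"

verify_nr_hugepages also reads HugePages_Surp (0, per the scan above) and HugePages_Rsvd (the scan that follows), presumably so surplus and reserved pages can be accounted for before the total is compared against the expected 1536.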
00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28286144 kB' 'MemAvailable: 31863800 kB' 'Buffers: 2704 kB' 'Cached: 10158168 kB' 'SwapCached: 0 kB' 'Active: 7172096 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778424 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520520 kB' 'Mapped: 162892 kB' 'Shmem: 6261080 kB' 'KReclaimable: 179804 kB' 'Slab: 526620 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346816 kB' 'KernelStack: 12464 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7875612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.156 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.157 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... near-identical xtrace records elided: setup/common.sh@31-32 walks each remaining /proc/meminfo field (Zswapped, Dirty, Writeback, ... HugePages_Total, HugePages_Free) with IFS=': ' read -r var val _ and continues until the requested HugePages_Rsvd field matches ...]
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
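For reference, a minimal sketch of the get_meminfo helper this trace is stepping through, reconstructed from the setup/common.sh@17-33 steps visible above (the /proc and sysfs paths and the extglob "Node <n> " prefix strip match the trace; everything else is an assumption, not the authoritative SPDK source):

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from this trace.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node statistics from sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }      # per-node lines carry a "Node <n> " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue # skip fields until the requested one
            echo "$val"                      # e.g. 0 for HugePages_Rsvd in this run
            return 0
        done <"$mem_f"
        return 1
    }
    get_meminfo HugePages_Rsvd               # prints 0 on this machine, per the trace

The per-field [[ ... ]] / continue records above are exactly this loop running once per meminfo line under set -x.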
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:19.158 nr_hugepages=1536
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.158 resv_hugepages=0
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.158 surplus_hugepages=0
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.158 anon_hugepages=0
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.158 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.159 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28284896 kB' 'MemAvailable: 31862552 kB' 'Buffers: 2704 kB' 'Cached: 10158192 kB' 'SwapCached: 0 kB' 'Active: 7171956 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778284 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520364 kB' 'Mapped: 162884 kB' 'Shmem: 6261104 kB' 'KReclaimable: 179804 kB' 'Slab: 526620 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346816 kB' 'KernelStack: 12416 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7875464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
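The checks at setup/hugepages.sh@107 and @110 encode a simple identity: the kernel-reported HugePages_Total must equal the requested page count plus surplus plus reserved. A sketch with the values visible in this run (1536 requested, 0 surplus, 0 reserved):

    # Accounting identity checked by the test, using this run's values.
    nr_hugepages=1536 surp=0 resv=0
    total=1536    # HugePages_Total from the /proc/meminfo snapshot above
    if (( total == nr_hugepages + surp + resv )); then
        echo "global hugepage accounting consistent"
    fi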
[... near-identical xtrace records elided: setup/common.sh@31-32 reads back each field of the /proc/meminfo snapshot just printed (MemTotal ... Unaccepted) and continues until HugePages_Total matches ...]
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
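A sketch of the get_nodes walk at setup/hugepages.sh@27-33 as it appears in the trace: NUMA nodes are enumerated from sysfs with an extglob pattern and one expected count is recorded per node (512 for node0 and 1024 for node1 in this run; the zero placeholder below is an assumption, since the values come from the caller):

    # Enumerate NUMA nodes the way the traced loop does.
    shopt -s extglob nullglob
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=0   # trace assigns 512 (node0), 1024 (node1)
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) && echo "no_nodes=$no_nodes"

${node##*node} strips everything up to the final "node", leaving just the node index, which is why the array keys come out as 0 and 1.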
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.160 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13623944 kB' 'MemUsed: 10995468 kB' 'SwapCached: 0 kB' 'Active: 5763788 kB' 'Inactive: 3329772 kB' 'Active(anon): 5505156 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8776580 kB' 'Mapped: 80288 kB' 'AnonPages: 320064 kB' 'Shmem: 5188176 kB' 'KernelStack: 7768 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116720 kB' 'Slab: 282872 kB' 'SReclaimable: 116720 kB' 'SUnreclaim: 166152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
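Note the mem=("${mem[@]#Node +([0-9]) }") step above: unlike /proc/meminfo, per-node meminfo lines carry a "Node <n> " prefix, and the expansion needs extglob to strip it. A quick standalone demo of that one expansion:

    shopt -s extglob
    line='Node 0 HugePages_Surp:      0'
    echo "${line#Node +([0-9]) }"   # -> HugePages_Surp:      0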
[... near-identical xtrace records elided: setup/common.sh@31-32 reads back each field of the node0 snapshot (MemTotal ... HugePages_Free) and continues until HugePages_Surp matches ...]
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
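A sketch of the per-node loop at setup/hugepages.sh@115-@117 that the trace is now iterating: each node's expected count absorbs the reserved and surplus pages before the final comparison (the 512/1024 targets and resv=0 are taken from this trace; the surp=0 literal stands in for the get_meminfo HugePages_Surp call, which returned 0 for node0 here):

    # Per-node expectation accumulation, as traced.
    declare -a nodes_test=([0]=512 [1]=1024)
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=0   # get_meminfo HugePages_Surp "$node" in the real run
        (( nodes_test[node] += surp ))
    done
    echo "expected: node0=${nodes_test[0]} node1=${nodes_test[1]}"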
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.162 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 14660952 kB' 'MemUsed: 4746292 kB' 'SwapCached: 0 kB' 'Active: 1408176 kB' 'Inactive: 176348 kB' 'Active(anon): 1273136 kB' 'Inactive(anon): 0 kB' 'Active(file): 135040 kB' 'Inactive(file): 176348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1384344 kB' 'Mapped: 82596 kB' 'AnonPages: 200228 kB' 'Shmem: 1072956 kB' 'KernelStack: 4664 kB' 'PageTables: 3228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63084 kB' 'Slab: 243748 kB' 'SReclaimable: 63084 kB' 'SUnreclaim: 180664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... near-identical xtrace records: setup/common.sh@31-32 reads back each field of the node1 snapshot for HugePages_Surp; the scan is still in progress at this point ...] 00:04:19.163 19:58:22
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:19.163 node0=512 expecting 512 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:19.163 node1=1024 expecting 1024 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:19.163 00:04:19.163 real 0m2.020s 00:04:19.163 user 0m0.856s 00:04:19.163 sys 0m1.150s 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.163 19:58:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.163 ************************************ 00:04:19.163 END TEST custom_alloc 00:04:19.163 ************************************ 00:04:19.449 19:58:22 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:19.449 19:58:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.449 19:58:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.449 19:58:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.449 ************************************ 00:04:19.449 START TEST no_shrink_alloc 00:04:19.449 ************************************ 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
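The get_test_nr_hugepages trace above requests 2097152 kB and lands on nr_hugepages=1024 pinned to node 0. A minimal bash sketch of that sizing step, reconstructed from the xtrace; the division by the 2048 kB default hugepage size is an assumption (the trace only shows the resulting 1024, but the meminfo dumps below report 'Hugepagesize: 2048 kB', and 2097152 / 2048 = 1024):

  #!/usr/bin/env bash
  # Hedged sketch, not the SPDK source: reproduce the counts seen in the
  # trace above (nr_hugepages=1024, nodes_test[0]=1024).
  get_test_nr_hugepages_sketch() {
      local size=$1; shift                   # requested size in kB, e.g. 2097152
      local -a node_ids=("$@")               # optional NUMA node ids, e.g. 0
      local default_hugepages=2048           # kB per page; assumed from 'Hugepagesize: 2048 kB'
      (( size >= default_hugepages )) || return 1
      local nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024
      local -a nodes_test=()
      local node
      for node in "${node_ids[@]}"; do
          nodes_test[node]=$nr_hugepages     # pin the whole allocation to each listed node
      done
      declare -p nodes_test                  # -> declare -a nodes_test=([0]="1024")
  }
  get_test_nr_hugepages_sketch 2097152 0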
00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:19.449 19:58:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:20.826 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:20.826 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:20.826 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:20.826 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:20.826 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:21.085 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:21.085 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:21.085 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:21.085 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:21.085 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:21.085 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:21.085 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:21.085 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:21.085 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:21.086 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:21.086 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:21.086 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
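Two things happen in the trace just above: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test confirms transparent hugepages are not globally disabled on this runner, and get_meminfo is entered with mem_f=/proc/meminfo because no node argument was given (the [[ -n '' ]] check fails). A runnable sketch of that helper, reconstructed from the visible xtrace rather than copied from setup/common.sh:

  #!/usr/bin/env bash
  # Hedged sketch of the get_meminfo flow (setup/common.sh@17-33 in the trace).
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}               # field name, optional NUMA node id
      local mem_f=/proc/meminfo
      local -a mem
      local line var val _
      # With a node id, read the per-node counters from sysfs instead.
      if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")       # strip the 'Node N ' prefix of sysfs lines
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # one skipped field per condensed continue below
          echo "$val"                        # e.g. 0 for AnonHugePages on this runner
          return 0
      done
      return 1
  }
  get_meminfo_sketch AnonHugePages

Each field-by-field pass below is one invocation of a loop like this; the value it echoes is captured by verify_nr_hugepages (anon, surp, resv).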
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29306144 kB' 'MemAvailable: 32883800 kB' 'Buffers: 2704 kB' 'Cached: 10158284 kB' 'SwapCached: 0 kB' 'Active: 7172268 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778596 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520632 kB' 'Mapped: 163064 kB' 'Shmem: 6261196 kB' 'KReclaimable: 179804 kB' 'Slab: 526656 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346852 kB' 'KernelStack: 12448 kB' 'PageTables: 7580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7875872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
00:04:21.086 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace condensed: every field from MemTotal through HardwareCorrupted skipped with continue until AnonHugePages matches]
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29316944 kB' 'MemAvailable: 32894600 kB' 'Buffers: 2704 kB' 'Cached: 10158288 kB' 'SwapCached: 0 kB' 'Active: 7172084 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778412 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520484 kB' 'Mapped: 162972 kB' 'Shmem: 6261200 kB' 'KReclaimable: 179804 kB' 'Slab: 526624 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346820 kB' 'KernelStack: 12448 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7875888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
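At this point get_meminfo has returned 0 for AnonHugePages (anon=0) and is being re-run for HugePages_Surp and then HugePages_Rsvd; both also come back 0 below. The dumps themselves show HugePages_Total: 1024 and HugePages_Free: 1024, i.e. the whole pool is allocated and idle, consistent with 'Hugetlb: 2097152 kB' (1024 pages x 2048 kB). If you want the same four counters without one full scan per field, a one-pass alternative (the awk line is my own, not the script's):

  # Hedged sketch: pull every HugePages_* counter from /proc/meminfo at once.
  eval "$(awk -F'[: ]+' '/^HugePages_/ { print $1 "=" $2 }' /proc/meminfo)"
  echo "total=$HugePages_Total free=$HugePages_Free rsvd=$HugePages_Rsvd surp=$HugePages_Surp"
  # On this runner: total=1024 free=1024 rsvd=0 surp=0, matching the dumps above.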
'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 
19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 
19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.351 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:21.352 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29318948 kB' 'MemAvailable: 32896604 kB' 'Buffers: 2704 kB' 'Cached: 10158324 kB' 'SwapCached: 0 kB' 'Active: 7172640 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778968 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521004 kB' 'Mapped: 162896 kB' 'Shmem: 6261236 kB' 'KReclaimable: 179804 kB' 'Slab: 526616 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346812 kB' 'KernelStack: 12480 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7878480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc 
00:04:21.353 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan condensed: every key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with continue]
00:04:21.354 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:21.354 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.354 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.354 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:21.354 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:21.354 nr_hugepages=1024
00:04:21.354 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:21.354 resv_hugepages=0
00:04:21.355 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:21.355 surplus_hugepages=0
00:04:21.355 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:21.355 anon_hugepages=0
00:04:21.355 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.355 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:21.355 19:58:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29319060 kB' 'MemAvailable: 32896716 kB' 'Buffers: 2704 kB' 'Cached: 10158328 kB' 'SwapCached: 0 kB' 'Active: 7173888 kB' 'Inactive: 3506120 kB' 'Active(anon): 6780216 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521728 kB' 'Mapped: 162896 kB' 'Shmem: 6261240 kB' 'KReclaimable: 179804 kB' 'Slab: 526616 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346812 kB' 'KernelStack: 12912 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7878500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
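The assertions at setup/hugepages.sh@107-110 above encode the accounting invariant under test: the kernel's HugePages_Total should equal the requested pool plus any surplus and reserved pages. With surp=0 and resv=0 in this run, that is 1024 == 1024 + 0 + 0. A hedged illustration, reusing the get_meminfo sketch from earlier:

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)        # 1024 in this log
    (( total == nr_hugepages + surp + resv )) &&
        echo "hugepage accounting is consistent"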
00:04:21.355 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan condensed: every key from MemTotal through HugePages_Free is compared against HugePages_Total and skipped with continue]
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
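The get_nodes step above discovers the NUMA topology by globbing /sys/devices/system/node and records one hugepage count per node (node0=1024, node1=0 on this machine). A sketch of that enumeration under the same extglob assumption; the real helper may read each node's count differently, so this version reuses the illustrative get_meminfo from earlier:

    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                                  # ".../node0" -> "0"
        nodes_sys[$id]=$(get_meminfo HugePages_Total "$id")
    done
    echo "no_nodes=${#nodes_sys[@]}"                       # 2 on this machine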
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12571776 kB' 'MemUsed: 12047636 kB' 'SwapCached: 0 kB' 'Active: 5764132 kB' 'Inactive: 3329772 kB' 'Active(anon): 5505500 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8776708 kB' 'Mapped: 80300 kB' 'AnonPages: 320416 kB' 'Shmem: 5188304 kB' 'KernelStack: 7800 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116720 kB' 'Slab: 282856 kB' 'SReclaimable: 116720 kB' 'SUnreclaim: 166136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
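The node-scoped snapshot above is the same lookup with a node argument, which switches the read to node0's sysfs meminfo. In terms of the earlier sketch:

    surp0=$(get_meminfo HugePages_Surp 0)   # reads /sys/devices/system/node/node0/meminfo
    echo "node0 surplus: $surp0"            # 0 in this log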
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.357 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.357 19:58:25 
[... 00:04:21.357-358 19:58:25: the field-by-field scan continues -- Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free are each read and skipped (setup/common.sh@31-32: IFS=': '; read -r var val _; continue) until HugePages_Surp is reached ...]
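Each run of "[[ X == ...HugePages_Surp ]] / continue" pairs in this trace is setup/common.sh's get_meminfo walking the memory-info file one field at a time until the requested key matches. A minimal stand-alone sketch of that pattern, assuming plain /proc/meminfo input (not the SPDK script itself; the per-node variant is sketched further down):

    #!/usr/bin/env bash
    # Minimal sketch of the lookup pattern traced above: split each
    # "Key:   value kB" line on ':' and spaces, skip until the requested
    # key matches, then print its numeric value.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # _ swallows the trailing "kB"
            [[ $var == "$get" ]] || continue   # the long runs of "continue" above
            echo "$val"                        # cf. common.sh@33 -- # echo 0
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }
    get_meminfo HugePages_Surp   # prints 0 here, matching the trace
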
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.358 node0=1024 expecting 1024 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.358 19:58:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.265 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:23.265 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:23.265 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:23.265 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:23.265 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:23.265 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:23.265 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:23.265 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:23.265 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:23.265 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:23.265 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:23.265 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:23.265 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:23.265 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:23.265 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:23.265 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:23.265 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:23.265 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:23.265 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.266 19:58:26 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29354120 kB' 'MemAvailable: 32931776 kB' 'Buffers: 2704 kB' 'Cached: 10158400 kB' 'SwapCached: 0 kB' 'Active: 7172464 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778792 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520668 kB' 'Mapped: 162988 kB' 'Shmem: 6261312 kB' 'KReclaimable: 179804 kB' 'Slab: 526636 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346832 kB' 'KernelStack: 12496 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7876116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 19:58:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.266 19:58:26 [... the same scan now runs for AnonHugePages: MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS and VmallocTotal are read and skipped (setup/common.sh@31-32, 00:04:23.266-267) ...]
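The snapshot printed at common.sh@16 above already carries the numbers the verifier cares about, and they are internally consistent: 1024 pages at a Hugepagesize of 2048 kB is exactly the 2097152 kB that Hugetlb reports. A quick consistency check over /proc/meminfo (a sketch; field names as printed in the snapshot):

    # Cross-check the hugepage math in a meminfo snapshot: HugePages_Total
    # multiplied by Hugepagesize should equal the Hugetlb total.
    awk '/^HugePages_Total/ {n=$2}
         /^Hugepagesize/    {sz=$2}
         /^Hugetlb/         {tot=$2}
         END {printf "%d pages x %d kB = %d kB (Hugetlb reports %d kB)\n",
              n, sz, n*sz, tot}' /proc/meminfo
    # On this runner: 1024 pages x 2048 kB = 2097152 kB (Hugetlb reports 2097152 kB)
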
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29354824 kB' 'MemAvailable: 32932480 kB' 'Buffers: 2704 kB' 'Cached: 10158400 kB' 'SwapCached: 0 kB' 'Active: 7172364 kB' 'Inactive: 3506120 kB' 'Active(anon): 6778692 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520572 kB' 'Mapped: 162904 kB' 'Shmem: 6261312 kB' 'KReclaimable: 179804 kB' 'Slab: 526592 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346788 kB' 'KernelStack: 12496 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7876132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 
19:58:26 [... the scan repeats for HugePages_Surp against the second snapshot: SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal and CmaFree are read and skipped (setup/common.sh@31-32, 00:04:23.268-269) ...] setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.269 19:58:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29355208 kB' 'MemAvailable: 32932864 kB' 'Buffers: 2704 kB' 'Cached: 10158424 kB' 'SwapCached: 0 kB' 'Active: 7175544 kB' 
'Inactive: 3506120 kB' 'Active(anon): 6781872 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523860 kB' 'Mapped: 163340 kB' 'Shmem: 6261336 kB' 'KReclaimable: 179804 kB' 'Slab: 526644 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346840 kB' 'KernelStack: 12512 kB' 'PageTables: 7572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7879888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB' 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.269 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.270 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.270 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.270 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.270 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.270 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.270 19:58:27 
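The traces at common.sh@23-29 show the node-aware side of the same helper: when a node number is passed, it reads /sys/devices/system/node/node$N/meminfo instead, and @29's extglob expansion strips the "Node N " prefix those lines carry. A sketch of that branch (helper name node_meminfo is mine; requires extglob):

    # Sketch of the per-node branch traced at common.sh@23-29: read the
    # node's meminfo into an array, then strip the "Node N " prefix.
    shopt -s extglob                  # needed for the +([0-9]) pattern below
    node_meminfo() {
        local node=$1 mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        printf '%s\n' "${mem[@]}"
    }
    node_meminfo 0 | grep HugePages_   # per-node hugepage counters
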
setup.sh.hugepages.no_shrink_alloc -- [... the final scan shown here runs for HugePages_Rsvd: Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables and NFS_Unstable are read and skipped (setup/common.sh@31-32, 00:04:23.270-271) ...]
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.271 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.533 nr_hugepages=1024 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.533 resv_hugepages=0 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.533 surplus_hugepages=0 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.533 anon_hugepages=0 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29356388 kB' 'MemAvailable: 32934044 kB' 'Buffers: 2704 kB' 
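The records above are setup/common.sh's get_meminfo helper resolving HugePages_Rsvd to 0, after which hugepages.sh reports nr_hugepages, resv_hugepages, surplus_hugepages, and anon_hugepages and immediately re-queries HugePages_Total. The parsing pattern is visible in the trace: snapshot the meminfo file, split each "Key: value" record on ':' plus whitespace, and walk the keys until the requested one matches, echoing the bare value. A minimal sketch of that pattern, assuming a simplified body rather than the verbatim helper:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (assumed simplification
# of setup/common.sh; the real helper mapfiles the whole file first).
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # A node argument switches to the per-node sysfs copy of meminfo,
    # as the node-0 query later in this log does.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}         # per-node files prefix each key with "Node N"
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long [[ ... ]]/continue runs above
        echo "$val"                        # bare value: "0", "1024", ...
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Rsvd    # -> 0 in this run
get_meminfo HugePages_Total   # -> 1024

The (( 1024 == nr_hugepages + surp + resv )) check that follows is the real assertion of this step: all 1024 configured pages must be accounted for by in-use, surplus, and reserved pages.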
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.533 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29356388 kB' 'MemAvailable: 32934044 kB' 'Buffers: 2704 kB' 'Cached: 10158424 kB' 'SwapCached: 0 kB' 'Active: 7177836 kB' 'Inactive: 3506120 kB' 'Active(anon): 6784164 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526112 kB' 'Mapped: 163676 kB' 'Shmem: 6261336 kB' 'KReclaimable: 179804 kB' 'Slab: 526644 kB' 'SReclaimable: 179804 kB' 'SUnreclaim: 346840 kB' 'KernelStack: 12512 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7882296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195732 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1717852 kB' 'DirectMap2M: 11833344 kB' 'DirectMap1G: 38797312 kB'
[... per-key xtrace scan elided: every meminfo field is tested against HugePages_Total and skipped with continue until the match below ...]
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.535 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12584292 kB' 'MemUsed: 12035120 kB' 'SwapCached: 0 kB' 'Active: 5763404 kB' 'Inactive: 3329772 kB' 'Active(anon): 5504772 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8776776 kB' 'Mapped: 80308 kB' 'AnonPages: 319556 kB' 'Shmem: 5188372 kB' 'KernelStack: 7848 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116720 kB' 'Slab: 282924 kB' 'SReclaimable: 116720 kB' 'SUnreclaim: 166204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... per-key xtrace scan elided: every node0 meminfo field is tested against HugePages_Surp and skipped with continue until the match below ...]
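The node-0 query differs from the global ones in one step: the mem=("${mem[@]#Node +([0-9]) }") record above is an extglob strip. Per-node meminfo records read "Node 0 MemTotal: ...", so the helper removes the "Node 0 " prefix from every mapfile'd line before matching keys. A sketch of just that step (assumes shopt -s extglob is in effect, as it is in these scripts, and that a NUMA node0 exists):

#!/usr/bin/env bash
shopt -s extglob
# Strip the "Node <n> " prefix from per-node meminfo records, as the
# mem=("${mem[@]#Node +([0-9]) }") xtrace record above does.
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]:0:3}"    # first few records, now without the prefix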
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.536 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.537 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.537 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.537 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.537 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.537 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.537 node0=1024 expecting 1024 00:04:23.537 19:58:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.537 00:04:23.537 real 0m4.172s 00:04:23.537 user 0m1.663s 00:04:23.537 sys 0m2.473s 00:04:23.537 19:58:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.537 19:58:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.537 ************************************ 00:04:23.537 END TEST no_shrink_alloc 00:04:23.537 ************************************ 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:23.537 19:58:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:23.537 00:04:23.537 real 0m15.665s 00:04:23.537 user 0m6.100s 00:04:23.537 sys 0m8.635s 00:04:23.537 19:58:27 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.537 19:58:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.537 ************************************ 00:04:23.537 END TEST hugepages 00:04:23.537 ************************************ 00:04:23.537 19:58:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:23.537 19:58:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.537 19:58:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.537 19:58:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.537 ************************************ 00:04:23.537 START TEST driver 00:04:23.537 ************************************ 00:04:23.537 19:58:27 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:23.796 * Looking for test storage... 
00:04:23.537 19:58:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:23.537 19:58:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:23.537 19:58:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:23.537 19:58:27 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:23.537 ************************************
00:04:23.537 START TEST driver
00:04:23.537 ************************************
00:04:23.537 19:58:27 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:23.796 * Looking for test storage...
00:04:23.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:23.796 19:58:27 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:23.796 19:58:27 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:23.796 19:58:27 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:27.080 19:58:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:27.080 ************************************
00:04:27.080 START TEST guess_driver
00:04:27.080 ************************************
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 ))
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:27.080 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:27.080 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:27.080 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:27.080 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:27.080 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:27.080 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:27.080 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:27.080 Looking for driver=vfio-pci
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:27.080 19:58:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:28.456 19:58:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:28.456 19:58:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:28.456 19:58:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same marker/driver confirmation repeated for every remaining device line reported by setup.sh config ...]
00:04:29.648 19:58:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:29.648 19:58:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:29.648 19:58:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:29.648 19:58:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:32.933 
00:04:32.933 real	0m5.921s
00:04:32.933 user	0m1.418s
00:04:32.933 sys	0m2.604s
00:04:32.933 19:58:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:32.933 19:58:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:32.933 ************************************
00:04:32.933 END TEST guess_driver
00:04:32.933 ************************************
00:04:32.933 
00:04:32.933 real	0m9.111s
00:04:32.933 user	0m2.092s
00:04:32.933 sys	0m4.071s
00:04:32.933 19:58:36 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:32.933 19:58:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:32.933 ************************************
00:04:32.933 END TEST driver
00:04:32.933 ************************************
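guess_driver settles on vfio-pci because the host exposes populated IOMMU groups (143 of them) and modprobe can resolve the whole vfio_pci dependency chain shown above. A compact sketch of that decision under the same checks; the function name and the uio fallback branch are illustrative, not the test's exact code:

#!/usr/bin/env bash
# Sketch: prefer vfio-pci when the IOMMU is usable, else fall back to uio.
shopt -s nullglob                  # an empty iommu_groups dir must count as zero
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &>/dev/null; then
        echo vfio-pci              # IOMMU groups exist and the module chain resolves
    else
        echo uio_pci_generic       # illustrative fallback; this log never takes it
    fi
}
driver=$(pick_driver)

The later loop over setup.sh config output then only has to confirm that every device line reports that same driver name.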
00:04:32.933 19:58:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:32.933 19:58:36 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:32.933 19:58:36 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:32.933 19:58:36 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:32.933 ************************************
00:04:32.933 START TEST devices
00:04:32.933 ************************************
00:04:32.933 19:58:36 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:32.933 * Looking for test storage...
00:04:32.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:32.933 19:58:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:32.933 19:58:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:32.933 19:58:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:32.933 19:58:36 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:34.840 19:58:38 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
[... zoned-device probe for nvme0n1: /sys/block/nvme0n1/queue/zoned reports none, so no device is excluded ...]
00:04:34.840 19:58:38 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:34.840 19:58:38 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:34.840 19:58:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:34.840 19:58:38 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0
00:04:34.840 19:58:38 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:34.840 19:58:38 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:34.840 No valid GPT data, bailing
00:04:35.100 19:58:38 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:35.100 19:58:38 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:35.100 19:58:38 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:35.100 19:58:38 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:35.100 19:58:38 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:04:35.100 19:58:38 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:04:35.100 19:58:38 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:35.100 19:58:38 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0
00:04:35.100 19:58:38 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:35.100 19:58:38 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:35.100 19:58:38 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:35.100 ************************************
00:04:35.100 START TEST nvme_mount
00:04:35.100 ************************************
00:04:35.100 19:58:38 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:35.100 19:58:38 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:35.100 19:58:38 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:35.100 19:58:38 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:35.100 19:58:38 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
[... partition bookkeeping: part_no=1, parts=(nvme0n1p1), size=1073741824 bytes converted to 512-byte sectors ...]
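Before a disk may serve as the scratch device, devices.sh proves it is not already in use (spdk-gpt.py finds no GPT, blkid finds no partition table) and that it meets min_disk_size, 3221225472 bytes (3 GiB). A hedged restatement of that gate; the sysfs size lookup stands in for the test's sec_size_to_bytes helper:

#!/usr/bin/env bash
# Sketch: accept a block device only if it is unpartitioned and >= 3 GiB.
min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472, as in the log
dev=nvme0n1
pt=$(blkid -s PTTYPE -o value "/dev/$dev")       # empty when no partition table
size=$(( $(cat "/sys/block/$dev/size") * 512 ))  # the size file counts 512 B sectors
if [[ -z $pt ]] && (( size >= min_disk_size )); then
    echo "using /dev/$dev ($size bytes) as test disk"
fi

Here the 1 TB disk (1000204886016 bytes) passes easily, so nvme0n1 becomes test_disk.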
00:04:35.100 19:58:38 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:35.100 19:58:38 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:36.037 Creating new GPT entries in memory.
00:04:36.037 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:36.037 other utilities.
00:04:36.038 19:58:39 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:36.038 19:58:39 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:36.038 19:58:39 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:36.974 Creating new GPT entries in memory.
00:04:36.974 The operation has completed successfully.
00:04:36.974 19:58:40 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1908939
00:04:36.974 19:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.974 19:58:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.974 19:58:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:36.974 19:58:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:37.252 19:58:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:37.252 19:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
[... verify() bookkeeping: dev=0000:82:00.0, mounts=nvme0n1:nvme0n1p1, the dummy test file is created, then the PCI status lines are read ...]
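The records above are the canonical prepare-a-scratch-partition sequence. A condensed sketch of the same steps; the device name and sector range come straight from the log, while the mountpoint is a stand-in for the test's nvme_mount directory:

#!/usr/bin/env bash
# Sketch: carve a 1 GiB GPT partition out of a scratch NVMe disk and mount it.
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all               # wipe any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199    # 2097152 sectors x 512 B = 1 GiB
mkfs.ext4 -qF "${disk}p1"              # quiet, force: the disk is disposable
mkdir -p /tmp/nvme_mount
mount "${disk}p1" /tmp/nvme_mount

The flock around sgdisk in the real test serializes partition-table writers while udev is still digesting the zap, and sync_dev_uevents.sh waits for the nvme0n1p1 uevent so the device node exists before mkfs runs.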
00:04:37.252 19:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:04:37.252 19:58:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:37.252 19:58:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
[... the remaining status lines (0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0) are read and skipped as non-matches ...]
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:39.173 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:39.173 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:39.432 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:39.432 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:39.432 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:39.432 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:39.432 19:58:42 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
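wipefs reports exactly which magic bytes it removed: the ext4 superblock magic (53 ef at offset 0x438 of the partition), the primary and backup GPT headers (the ASCII signature "EFI PART", 45 46 49 20 50 41 52 54), and the protective-MBR boot signature (55 aa at offset 0x1fe). A quick follow-up check that a device really is blank afterwards, offered only as an illustration:

#!/usr/bin/env bash
# Sketch: verify that no filesystem or partition-table signatures remain.
dev=/dev/nvme0n1
if [[ -n "$(wipefs "$dev")" ]]; then   # with no flags wipefs only reports, never erases
    echo "$dev still carries signatures" >&2
    exit 1
fi
echo "$dev is clean"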
00:04:39.432 19:58:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.432 19:58:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:39.432 19:58:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:39.432 19:58:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.432 19:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:39.432 19:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:04:39.432 19:58:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:39.432 19:58:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:40.809 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:40.809 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:40.809 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
[... the remaining status lines (0000:00:04.* and 0000:80:04.*) are read and skipped as non-matches ...]
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
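Each verify() pass re-runs scripts/setup.sh config with PCI_ALLOWED pinned to the test disk's address and confirms the script refuses to rebind a device that is mounted or held. A rough sketch of that read loop; the field layout is inferred from the "read -r pci _ _ status" records above, and the repo-relative path is an assumption:

#!/usr/bin/env bash
# Sketch: check that setup.sh reports our NVMe as busy instead of binding it.
found=0
while read -r pci _ _ status; do
    [[ $pci == 0000:82:00.0 ]] || continue
    # setup.sh explains *why* it skipped the device in the status column
    [[ $status == *"Active devices:"*"nvme0n1"* ]] && found=1
done < <(PCI_ALLOWED=0000:82:00.0 ./scripts/setup.sh config)
(( found == 1 )) || { echo "device was not protected" >&2; exit 1; }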
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' ''
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:41.068 19:58:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:42.448 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:42.448 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:42.448 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
[... the remaining status lines (0000:00:04.* and 0000:80:04.*) are read and skipped as non-matches ...]
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:42.708 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:42.708 
00:04:42.708 real	0m7.714s
00:04:42.708 user	0m1.915s
00:04:42.708 sys	0m3.450s
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:42.708 19:58:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:42.708 ************************************
00:04:42.708 END TEST nvme_mount
00:04:42.708 ************************************
00:04:42.708 19:58:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:42.708 19:58:46 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:42.708 19:58:46 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:42.708 19:58:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:42.708 ************************************
00:04:42.708 START TEST dm_mount
00:04:42.708 ************************************
00:04:42.708 19:58:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:04:42.708 19:58:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:42.708 19:58:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:42.708 19:58:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:42.708 19:58:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
[... partition bookkeeping: part_no=2, parts=(nvme0n1p1 nvme0n1p2), size=1073741824 bytes converted to 512-byte sectors ...]
00:04:42.708 19:58:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:42.708 19:58:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:44.087 Creating new GPT entries in memory.
00:04:44.087 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:44.087 other utilities.
00:04:44.087 19:58:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:44.087 19:58:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:44.087 19:58:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:45.024 Creating new GPT entries in memory.
00:04:45.024 The operation has completed successfully.
00:04:45.024 19:58:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:45.024 19:58:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:45.024 19:58:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:45.024 19:58:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:45.024 19:58:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:45.961 The operation has completed successfully.
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1911489
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
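dmsetup create only names the mapper device; the mapping table arrives on stdin, and the log does not record the table the test feeds it. The following is therefore a guess at the simplest equivalent: a linear concatenation of the two 1 GiB partitions into one 2 GiB device, after which the holders/ links checked above appear automatically.

#!/usr/bin/env bash
# Sketch: join nvme0n1p1 and nvme0n1p2 into /dev/mapper/nvme_dm_test.
# Table format: <start_sector> <num_sectors> linear <backing_dev> <offset>.
# 2097152 sectors = 1 GiB at 512 B/sector; the linear layout is an assumption.
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test

The five-iteration wait loop in the test exists because /dev/mapper nodes are created asynchronously by udev, so the path may not exist the instant dmsetup returns.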
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:45.961 19:58:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:47.339 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:47.339 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:47.339 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
[... the remaining status lines (0000:00:04.* and 0000:80:04.*) are read and skipped as non-matches ...]
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:47.599 19:58:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... non-match records for 0000:00:04.7 through 0000:00:04.0 ...]
00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:48.976 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.234 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.234 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:49.234 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:49.234 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:49.234 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.234 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:49.235 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:49.235 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.235 19:58:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:49.235 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.235 19:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:49.235 19:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:49.235 00:04:49.235 real 0m6.560s 00:04:49.235 user 0m1.248s 00:04:49.235 sys 0m2.193s 00:04:49.235 19:58:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.235 19:58:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 ************************************ 00:04:49.235 END TEST dm_mount 00:04:49.235 ************************************ 00:04:49.493 19:58:53 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:49.493 19:58:53 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:49.494 19:58:53 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.494 19:58:53 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.494 19:58:53 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:49.494 19:58:53 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.494 19:58:53 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.756 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:49.756 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:49.756 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.756 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:49.756 19:58:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:49.756 19:58:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.756 19:58:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:49.756 19:58:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.756 19:58:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:49.756 19:58:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.756 19:58:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:49.756 00:04:49.756 real 0m16.902s 00:04:49.756 user 0m4.057s 00:04:49.756 sys 0m7.182s 00:04:49.756 19:58:53 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.756 19:58:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:49.756 ************************************ 00:04:49.756 END TEST devices 00:04:49.756 ************************************ 00:04:49.756 00:04:49.756 real 0m55.719s 00:04:49.756 user 0m16.789s 00:04:49.756 sys 0m27.539s 00:04:49.756 19:58:53 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.756 19:58:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.756 ************************************ 00:04:49.756 END TEST setup.sh 00:04:49.756 ************************************ 00:04:49.756 19:58:53 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:51.662 Hugepages 00:04:51.662 node hugesize free / total 00:04:51.662 node0 1048576kB 0 / 0 00:04:51.662 node0 2048kB 2048 / 2048 00:04:51.662 node1 1048576kB 0 / 0 00:04:51.662 node1 2048kB 0 / 0 00:04:51.662 00:04:51.662 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.662 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:51.662 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:51.662 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:51.662 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:51.662 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:51.662 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:51.662 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:51.662 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:51.662 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:51.662 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:51.662 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:51.662 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:51.662 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:51.662 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:51.662 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:51.662 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:51.662 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:51.662 19:58:55 -- spdk/autotest.sh@130 -- # uname -s 00:04:51.662 19:58:55 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:51.662 19:58:55 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:51.662 19:58:55 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.566 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:53.566 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:53.566 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:53.566 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:53.566 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:53.566 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:53.566 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:53.566 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:53.566 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:53.566 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:53.566 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:53.566 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:53.566 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:53.566 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:53.566 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:53.566 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:54.536 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.536 19:58:58 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:55.471 19:58:59 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:55.471 19:58:59 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:55.471 19:58:59 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.471 19:58:59 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:55.471 19:58:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:55.471 19:58:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:55.471 19:58:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.471 19:58:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:55.471 19:58:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:55.730 19:58:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:55.730 19:58:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:04:55.730 19:58:59 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.106 Waiting for block devices as requested 00:04:57.106 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:04:57.365 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:57.365 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:57.365 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:57.623 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:57.623 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:57.623 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:57.882 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:57.882 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:57.882 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:57.882 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:58.141 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:58.141 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:58.141 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:58.141 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:58.399 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:58.399 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:58.399 19:59:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
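The loop entered above iterates the NVMe BDFs that gen_nvme.sh reported, and the records that follow resolve each BDF to its /dev controller node through sysfs before querying it with nvme-cli. A minimal standalone sketch of that mapping, assuming the BDF 0000:82:00.0 seen in this run, root privileges, and an installed nvme-cli:

    #!/usr/bin/env bash
    # Resolve an NVMe PCI address (BDF) to its character device, the same
    # readlink/grep/basename dance the trace above performs.
    bdf=0000:82:00.0
    for link in /sys/class/nvme/nvme*; do
        # readlink -f expands the class symlink to the full PCI sysfs path,
        # e.g. /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0
        if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
            ctrlr=/dev/$(basename "$link")
        fi
    done
    echo "controller for $bdf: ${ctrlr:?no controller found}"
    # OACS from Identify Controller tells whether namespace management is
    # supported (bit 3, value 8), which is what the trace greps for next.
    sudo nvme id-ctrl "$ctrlr" | grep oacs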
00:04:58.399 19:59:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:04:58.399 19:59:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:58.399 19:59:02 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:04:58.658 19:59:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:58.658 19:59:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:04:58.658 19:59:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:58.658 19:59:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:58.658 19:59:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:58.658 19:59:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:58.658 19:59:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:58.658 19:59:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:58.658 19:59:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:58.658 19:59:02 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:58.658 19:59:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:58.658 19:59:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:58.658 19:59:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:58.658 19:59:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:58.658 19:59:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:58.658 19:59:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:58.658 19:59:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:58.658 19:59:02 -- common/autotest_common.sh@1557 -- # continue 00:04:58.658 19:59:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:58.658 19:59:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.658 19:59:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.658 19:59:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:58.658 19:59:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.658 19:59:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.658 19:59:02 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.036 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:00.036 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:00.295 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:00.295 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:00.295 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:00.295 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:00.295 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:00.295 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:00.295 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:00.295 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:00.295 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:00.295 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:00.295 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:00.295 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:00.295 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:00.295 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:01.231 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:01.231 19:59:04 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:01.231 19:59:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.231 19:59:04 -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.231 19:59:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:01.231 19:59:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:01.231 19:59:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:01.231 19:59:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:01.231 19:59:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:01.231 19:59:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:01.231 19:59:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:01.231 19:59:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:01.231 19:59:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.231 19:59:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:01.231 19:59:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:01.490 19:59:05 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:01.490 19:59:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:05:01.490 19:59:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:01.490 19:59:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:01.490 19:59:05 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:01.490 19:59:05 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:01.490 19:59:05 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:01.490 19:59:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:05:01.490 19:59:05 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:05:01.490 19:59:05 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1916979 00:05:01.490 19:59:05 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.490 19:59:05 -- common/autotest_common.sh@1598 -- # waitforlisten 1916979 00:05:01.490 19:59:05 -- common/autotest_common.sh@831 -- # '[' -z 1916979 ']' 00:05:01.490 19:59:05 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.490 19:59:05 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.490 19:59:05 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.490 19:59:05 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.490 19:59:05 -- common/autotest_common.sh@10 -- # set +x 00:05:01.491 [2024-07-24 19:59:05.191470] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:05:01.491 [2024-07-24 19:59:05.191659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916979 ] 00:05:01.491 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.750 [2024-07-24 19:59:05.333509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.009 [2024-07-24 19:59:05.547447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.268 19:59:05 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.268 19:59:05 -- common/autotest_common.sh@864 -- # return 0 00:05:02.268 19:59:05 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:02.268 19:59:05 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:02.268 19:59:05 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:06.459 nvme0n1 00:05:06.459 19:59:09 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:06.459 [2024-07-24 19:59:09.843972] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:06.459 [2024-07-24 19:59:09.844073] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:06.459 request: 00:05:06.459 { 00:05:06.459 "nvme_ctrlr_name": "nvme0", 00:05:06.459 "password": "test", 00:05:06.459 "method": "bdev_nvme_opal_revert", 00:05:06.459 "req_id": 1 00:05:06.459 } 00:05:06.459 Got JSON-RPC error response 00:05:06.459 response: 00:05:06.459 { 00:05:06.459 "code": -32603, 00:05:06.459 "message": "Internal error" 00:05:06.459 } 00:05:06.459 19:59:09 -- common/autotest_common.sh@1604 -- # true 00:05:06.459 19:59:09 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:06.459 19:59:09 -- common/autotest_common.sh@1608 -- # killprocess 1916979 00:05:06.459 19:59:09 -- common/autotest_common.sh@950 -- # '[' -z 1916979 ']' 00:05:06.459 19:59:09 -- common/autotest_common.sh@954 -- # kill -0 1916979 00:05:06.459 19:59:09 -- common/autotest_common.sh@955 -- # uname 00:05:06.459 19:59:09 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.459 19:59:09 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1916979 00:05:06.459 19:59:09 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.459 19:59:09 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.459 19:59:09 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1916979' 00:05:06.459 killing process with pid 1916979 00:05:06.459 19:59:09 -- common/autotest_common.sh@969 -- # kill 1916979 00:05:06.459 19:59:09 -- common/autotest_common.sh@974 -- # wait 1916979 00:05:08.360 19:59:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:08.360 19:59:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:08.360 19:59:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:08.360 19:59:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:08.360 19:59:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:08.360 19:59:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.360 19:59:11 -- common/autotest_common.sh@10 -- # set +x 00:05:08.360 19:59:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:08.360 19:59:11 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:08.360 19:59:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.360 19:59:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.360 19:59:11 -- common/autotest_common.sh@10 -- # set +x 00:05:08.360 ************************************ 00:05:08.360 START TEST env 00:05:08.360 ************************************ 00:05:08.360 19:59:11 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:08.360 * Looking for test storage... 00:05:08.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:08.360 19:59:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.360 19:59:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.360 19:59:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.360 19:59:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.360 ************************************ 00:05:08.360 START TEST env_memory 00:05:08.360 ************************************ 00:05:08.360 19:59:12 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.360 00:05:08.360 00:05:08.360 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.360 http://cunit.sourceforge.net/ 00:05:08.360 00:05:08.360 00:05:08.360 Suite: memory 00:05:08.618 Test: alloc and free memory map ...[2024-07-24 19:59:12.150029] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:08.618 passed 00:05:08.618 Test: mem map translation ...[2024-07-24 19:59:12.205652] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:08.618 [2024-07-24 19:59:12.205715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:08.618 [2024-07-24 19:59:12.205832] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:08.618 [2024-07-24 19:59:12.205866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:08.618 passed 00:05:08.619 Test: mem map registration ...[2024-07-24 19:59:12.315870] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:08.619 [2024-07-24 19:59:12.315941] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:08.619 passed 00:05:08.877 Test: mem map adjacent registrations ...passed 00:05:08.877 00:05:08.877 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.877 suites 1 1 n/a 0 0 00:05:08.877 tests 4 4 4 0 0 00:05:08.877 asserts 152 152 152 0 n/a 00:05:08.877 00:05:08.877 Elapsed time = 0.365 seconds 00:05:08.877 00:05:08.877 real 0m0.380s 00:05:08.877 user 0m0.365s 00:05:08.877 sys 0m0.013s 00:05:08.877 19:59:12 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.877 19:59:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:08.877 ************************************ 00:05:08.877 END TEST env_memory 00:05:08.877 ************************************ 00:05:08.877 19:59:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.877 19:59:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.877 19:59:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.877 19:59:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.877 ************************************ 00:05:08.877 START TEST env_vtophys 00:05:08.877 ************************************ 00:05:08.877 19:59:12 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.877 EAL: lib.eal log level changed from notice to debug 00:05:08.877 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.877 EAL: Detected lcore 1 as core 1 on socket 0 00:05:08.877 EAL: Detected lcore 2 as core 2 on socket 0 00:05:08.877 EAL: Detected lcore 3 as core 3 on socket 0 00:05:08.877 EAL: Detected lcore 4 as core 4 on socket 0 00:05:08.877 EAL: Detected lcore 5 as core 5 on socket 0 00:05:08.877 EAL: Detected lcore 6 as core 8 on socket 0 00:05:08.877 EAL: Detected lcore 7 as core 9 on socket 0 00:05:08.877 EAL: Detected lcore 8 as core 10 on socket 0 00:05:08.877 EAL: Detected lcore 9 as core 11 on socket 0 00:05:08.877 EAL: Detected lcore 10 as core 12 on socket 0 00:05:08.877 EAL: Detected lcore 11 as core 13 on socket 0 00:05:08.877 EAL: Detected lcore 12 as core 0 on socket 1 00:05:08.877 EAL: Detected lcore 13 as core 1 on socket 1 00:05:08.877 EAL: Detected lcore 14 as core 2 on socket 1 00:05:08.877 EAL: Detected lcore 15 as core 3 on socket 1 00:05:08.877 EAL: Detected lcore 16 as core 4 on socket 1 00:05:08.877 EAL: Detected lcore 17 as core 5 on socket 1 00:05:08.877 EAL: Detected lcore 18 as core 8 on socket 1 00:05:08.877 EAL: Detected lcore 19 as core 9 on socket 1 00:05:08.877 EAL: Detected lcore 20 as core 10 on socket 1 00:05:08.877 EAL: Detected lcore 21 as core 11 on socket 1 00:05:08.877 EAL: Detected lcore 22 as core 12 on socket 1 00:05:08.877 EAL: Detected lcore 23 as core 13 on socket 1 00:05:08.877 EAL: Detected lcore 24 as core 0 on socket 0 00:05:08.877 EAL: Detected lcore 25 as core 1 on socket 0 00:05:08.877 EAL: Detected lcore 26 as core 2 on socket 0 00:05:08.877 EAL: Detected lcore 27 as core 3 on socket 0 00:05:08.877 EAL: Detected lcore 28 as core 4 on socket 0 00:05:08.877 EAL: Detected lcore 29 as core 5 on socket 0 00:05:08.877 EAL: Detected lcore 30 as core 8 on socket 0 00:05:08.877 EAL: Detected lcore 31 as core 9 on socket 0 00:05:08.877 EAL: Detected lcore 32 as core 10 on socket 0 00:05:08.877 EAL: Detected lcore 33 as core 11 on socket 0 00:05:08.877 EAL: Detected lcore 34 as core 12 on socket 0 00:05:08.877 EAL: Detected lcore 35 as core 13 on socket 0 00:05:08.877 EAL: Detected lcore 36 as core 0 on socket 1 00:05:08.877 EAL: Detected lcore 37 as core 1 on socket 1 00:05:08.877 EAL: Detected lcore 38 as core 2 on socket 1 00:05:08.877 EAL: Detected lcore 39 as core 3 on socket 1 00:05:08.877 EAL: Detected lcore 40 as core 4 on socket 1 00:05:08.877 EAL: Detected lcore 41 as core 5 on socket 1 00:05:08.877 EAL: Detected lcore 42 as core 8 on socket 1 00:05:08.877 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:08.877 EAL: Detected lcore 44 as core 10 on socket 1 00:05:08.877 EAL: Detected lcore 45 as core 11 on socket 1 00:05:08.877 EAL: Detected lcore 46 as core 12 on socket 1 00:05:08.877 EAL: Detected lcore 47 as core 13 on socket 1 00:05:08.877 EAL: Maximum logical cores by configuration: 128 00:05:08.877 EAL: Detected CPU lcores: 48 00:05:08.877 EAL: Detected NUMA nodes: 2 00:05:08.877 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:08.877 EAL: Detected shared linkage of DPDK 00:05:08.877 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.877 EAL: Bus pci wants IOVA as 'DC' 00:05:08.877 EAL: Buses did not request a specific IOVA mode. 00:05:08.877 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:08.877 EAL: Selected IOVA mode 'VA' 00:05:08.877 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.877 EAL: Probing VFIO support... 00:05:08.877 EAL: IOMMU type 1 (Type 1) is supported 00:05:08.877 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:08.877 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:08.877 EAL: VFIO support initialized 00:05:08.877 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.877 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.877 EAL: Setting up physically contiguous memory... 00:05:08.877 EAL: Setting maximum number of open files to 524288 00:05:08.877 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.877 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:08.877 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.877 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.877 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.877 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.877 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.877 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.877 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.877 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.878 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.878 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.878 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.878 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.878 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.878 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.878 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.878 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.878 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.878 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.878 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.878 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.878 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.878 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.878 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.878 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.878 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.878 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:08.878 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.878 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:08.878 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.878 EAL: Ask a virtual 
area of 0x400000000 bytes 00:05:08.878 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:08.878 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:08.878 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.878 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:08.878 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.878 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.878 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:08.878 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:08.878 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.878 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:08.878 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.878 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.878 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:08.878 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:08.878 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.878 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:08.878 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.878 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.878 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:08.878 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:08.878 EAL: Hugepages will be freed exactly as allocated. 00:05:08.878 EAL: No shared files mode enabled, IPC is disabled 00:05:08.878 EAL: No shared files mode enabled, IPC is disabled 00:05:08.878 EAL: TSC frequency is ~2700000 KHz 00:05:08.878 EAL: Main lcore 0 is ready (tid=7f0372485a00;cpuset=[0]) 00:05:08.878 EAL: Trying to obtain current memory policy. 00:05:08.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.878 EAL: Restoring previous memory policy: 0 00:05:08.878 EAL: request: mp_malloc_sync 00:05:08.878 EAL: No shared files mode enabled, IPC is disabled 00:05:08.878 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.878 EAL: No shared files mode enabled, IPC is disabled 00:05:09.136 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:09.137 EAL: Mem event callback 'spdk:(nil)' registered 00:05:09.137 00:05:09.137 00:05:09.137 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.137 http://cunit.sourceforge.net/ 00:05:09.137 00:05:09.137 00:05:09.137 Suite: components_suite 00:05:09.137 Test: vtophys_malloc_test ...passed 00:05:09.137 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:09.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.137 EAL: Restoring previous memory policy: 4 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was expanded by 4MB 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was shrunk by 4MB 00:05:09.137 EAL: Trying to obtain current memory policy. 
00:05:09.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.137 EAL: Restoring previous memory policy: 4 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was expanded by 6MB 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was shrunk by 6MB 00:05:09.137 EAL: Trying to obtain current memory policy. 00:05:09.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.137 EAL: Restoring previous memory policy: 4 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was expanded by 10MB 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was shrunk by 10MB 00:05:09.137 EAL: Trying to obtain current memory policy. 00:05:09.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.137 EAL: Restoring previous memory policy: 4 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was expanded by 18MB 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was shrunk by 18MB 00:05:09.137 EAL: Trying to obtain current memory policy. 00:05:09.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.137 EAL: Restoring previous memory policy: 4 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was expanded by 34MB 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was shrunk by 34MB 00:05:09.137 EAL: Trying to obtain current memory policy. 00:05:09.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.137 EAL: Restoring previous memory policy: 4 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was expanded by 66MB 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was shrunk by 66MB 00:05:09.137 EAL: Trying to obtain current memory policy. 
00:05:09.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.137 EAL: Restoring previous memory policy: 4 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was expanded by 130MB 00:05:09.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.137 EAL: request: mp_malloc_sync 00:05:09.137 EAL: No shared files mode enabled, IPC is disabled 00:05:09.137 EAL: Heap on socket 0 was shrunk by 130MB 00:05:09.137 EAL: Trying to obtain current memory policy. 00:05:09.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.395 EAL: Restoring previous memory policy: 4 00:05:09.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.395 EAL: request: mp_malloc_sync 00:05:09.395 EAL: No shared files mode enabled, IPC is disabled 00:05:09.395 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.395 EAL: request: mp_malloc_sync 00:05:09.395 EAL: No shared files mode enabled, IPC is disabled 00:05:09.395 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.395 EAL: Trying to obtain current memory policy. 00:05:09.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.660 EAL: Restoring previous memory policy: 4 00:05:09.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.660 EAL: request: mp_malloc_sync 00:05:09.660 EAL: No shared files mode enabled, IPC is disabled 00:05:09.660 EAL: Heap on socket 0 was expanded by 514MB 00:05:09.917 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.917 EAL: request: mp_malloc_sync 00:05:09.917 EAL: No shared files mode enabled, IPC is disabled 00:05:09.917 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.917 EAL: Trying to obtain current memory policy. 
00:05:09.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.482 EAL: Restoring previous memory policy: 4 00:05:10.483 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.483 EAL: request: mp_malloc_sync 00:05:10.483 EAL: No shared files mode enabled, IPC is disabled 00:05:10.483 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.763 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.060 EAL: request: mp_malloc_sync 00:05:11.060 EAL: No shared files mode enabled, IPC is disabled 00:05:11.060 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.060 passed 00:05:11.060 00:05:11.060 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.060 suites 1 1 n/a 0 0 00:05:11.060 tests 2 2 2 0 0 00:05:11.060 asserts 497 497 497 0 n/a 00:05:11.060 00:05:11.060 Elapsed time = 1.872 seconds 00:05:11.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.060 EAL: request: mp_malloc_sync 00:05:11.060 EAL: No shared files mode enabled, IPC is disabled 00:05:11.060 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.060 EAL: No shared files mode enabled, IPC is disabled 00:05:11.060 EAL: No shared files mode enabled, IPC is disabled 00:05:11.060 EAL: No shared files mode enabled, IPC is disabled 00:05:11.060 00:05:11.060 real 0m2.086s 00:05:11.060 user 0m1.053s 00:05:11.060 sys 0m0.986s 00:05:11.060 19:59:14 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.060 19:59:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:11.060 ************************************ 00:05:11.060 END TEST env_vtophys 00:05:11.060 ************************************ 00:05:11.061 19:59:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.061 19:59:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.061 19:59:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.061 19:59:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.061 ************************************ 00:05:11.061 START TEST env_pci 00:05:11.061 ************************************ 00:05:11.061 19:59:14 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.061 00:05:11.061 00:05:11.061 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.061 http://cunit.sourceforge.net/ 00:05:11.061 00:05:11.061 00:05:11.061 Suite: pci 00:05:11.061 Test: pci_hook ...[2024-07-24 19:59:14.693559] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1918131 has claimed it 00:05:11.061 EAL: Cannot find device (10000:00:01.0) 00:05:11.061 EAL: Failed to attach device on primary process 00:05:11.061 passed 00:05:11.061 00:05:11.061 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.061 suites 1 1 n/a 0 0 00:05:11.061 tests 1 1 1 0 0 00:05:11.061 asserts 25 25 25 0 n/a 00:05:11.061 00:05:11.061 Elapsed time = 0.023 seconds 00:05:11.061 00:05:11.061 real 0m0.037s 00:05:11.061 user 0m0.013s 00:05:11.061 sys 0m0.024s 00:05:11.061 19:59:14 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.061 19:59:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:11.061 ************************************ 00:05:11.061 END TEST env_pci 00:05:11.061 ************************************ 00:05:11.061 19:59:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:11.061 
19:59:14 env -- env/env.sh@15 -- # uname 00:05:11.061 19:59:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:11.061 19:59:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:11.061 19:59:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.061 19:59:14 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:11.061 19:59:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.061 19:59:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.061 ************************************ 00:05:11.061 START TEST env_dpdk_post_init 00:05:11.061 ************************************ 00:05:11.061 19:59:14 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.061 EAL: Detected CPU lcores: 48 00:05:11.061 EAL: Detected NUMA nodes: 2 00:05:11.061 EAL: Detected shared linkage of DPDK 00:05:11.061 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.320 EAL: Selected IOVA mode 'VA' 00:05:11.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.320 EAL: VFIO support initialized 00:05:11.320 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.320 EAL: Using IOMMU type 1 (Type 1) 00:05:11.320 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:11.320 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:11.320 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:11.320 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:11.320 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:11.320 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:11.320 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:11.580 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:12.515 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:05:15.805 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:15.805 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:15.805 Starting DPDK initialization... 00:05:15.805 Starting SPDK post initialization... 00:05:15.805 SPDK NVMe probe 00:05:15.805 Attaching to 0000:82:00.0 00:05:15.805 Attached to 0000:82:00.0 00:05:15.805 Cleaning up... 
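The post-init test above is launched through run_test with an explicit core mask and base virtual address; the same invocation can be reproduced by hand. A sketch using the workspace path and arguments from this run, assuming the test binary has been built and hugepages were configured via scripts/setup.sh:

    # -c 0x1 pins EAL to core 0; --base-virtaddr fixes where hugepage
    # memory is mapped so multi-process attach stays deterministic.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000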
00:05:15.805 00:05:15.805 real 0m4.575s 00:05:15.805 user 0m3.355s 00:05:15.805 sys 0m0.265s 00:05:15.805 19:59:19 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.805 19:59:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.805 ************************************ 00:05:15.805 END TEST env_dpdk_post_init 00:05:15.805 ************************************ 00:05:15.805 19:59:19 env -- env/env.sh@26 -- # uname 00:05:15.805 19:59:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.805 19:59:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.805 19:59:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.805 19:59:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.805 19:59:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.805 ************************************ 00:05:15.805 START TEST env_mem_callbacks 00:05:15.806 ************************************ 00:05:15.806 19:59:19 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.806 EAL: Detected CPU lcores: 48 00:05:15.806 EAL: Detected NUMA nodes: 2 00:05:15.806 EAL: Detected shared linkage of DPDK 00:05:15.806 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.806 EAL: Selected IOVA mode 'VA' 00:05:15.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.806 EAL: VFIO support initialized 00:05:15.806 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.806 00:05:15.806 00:05:15.806 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.806 http://cunit.sourceforge.net/ 00:05:15.806 00:05:15.806 00:05:15.806 Suite: memory 00:05:15.806 Test: test ... 
00:05:15.806 register 0x200000200000 2097152 00:05:15.806 malloc 3145728 00:05:15.806 register 0x200000400000 4194304 00:05:15.806 buf 0x200000500000 len 3145728 PASSED 00:05:15.806 malloc 64 00:05:15.806 buf 0x2000004fff40 len 64 PASSED 00:05:15.806 malloc 4194304 00:05:15.806 register 0x200000800000 6291456 00:05:15.806 buf 0x200000a00000 len 4194304 PASSED 00:05:15.806 free 0x200000500000 3145728 00:05:15.806 free 0x2000004fff40 64 00:05:15.806 unregister 0x200000400000 4194304 PASSED 00:05:15.806 free 0x200000a00000 4194304 00:05:15.806 unregister 0x200000800000 6291456 PASSED 00:05:15.806 malloc 8388608 00:05:15.806 register 0x200000400000 10485760 00:05:15.806 buf 0x200000600000 len 8388608 PASSED 00:05:15.806 free 0x200000600000 8388608 00:05:15.806 unregister 0x200000400000 10485760 PASSED 00:05:15.806 passed 00:05:15.806 00:05:15.806 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.806 suites 1 1 n/a 0 0 00:05:15.806 tests 1 1 1 0 0 00:05:15.806 asserts 15 15 15 0 n/a 00:05:15.806 00:05:15.806 Elapsed time = 0.009 seconds 00:05:15.806 00:05:15.806 real 0m0.062s 00:05:15.806 user 0m0.020s 00:05:15.806 sys 0m0.042s 00:05:15.806 19:59:19 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.806 19:59:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.806 ************************************ 00:05:15.806 END TEST env_mem_callbacks 00:05:15.806 ************************************ 00:05:15.806 00:05:15.806 real 0m7.562s 00:05:15.806 user 0m4.962s 00:05:15.806 sys 0m1.614s 00:05:15.806 19:59:19 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.806 19:59:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.806 ************************************ 00:05:15.806 END TEST env 00:05:15.806 ************************************ 00:05:15.806 19:59:19 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.806 19:59:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.806 19:59:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.806 19:59:19 -- common/autotest_common.sh@10 -- # set +x 00:05:15.806 ************************************ 00:05:15.806 START TEST rpc 00:05:15.806 ************************************ 00:05:15.806 19:59:19 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.065 * Looking for test storage... 00:05:16.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.065 19:59:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1918788 00:05:16.065 19:59:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:16.065 19:59:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.065 19:59:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1918788 00:05:16.065 19:59:19 rpc -- common/autotest_common.sh@831 -- # '[' -z 1918788 ']' 00:05:16.065 19:59:19 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.065 19:59:19 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.065 19:59:19 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
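waitforlisten, invoked above with max_retries=100, blocks until spdk_tgt answers on /var/tmp/spdk.sock. An equivalent poll can be written directly against the RPC socket; a sketch, assuming the default socket path and the core rpc_get_methods RPC (part of every SPDK target):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # rpc.py exits non-zero while the socket is absent or the app is
    # still initializing, so a retry loop doubles as a readiness probe.
    for i in $(seq 1 100); do
        if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
                >/dev/null 2>&1; then
            echo "spdk_tgt is up after $i attempts"
            break
        fi
        sleep 0.1
    done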
00:05:16.065 19:59:19 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.065 19:59:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.065 [2024-07-24 19:59:19.759765] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:05:16.065 [2024-07-24 19:59:19.759949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918788 ] 00:05:16.065 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.324 [2024-07-24 19:59:19.897531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.325 [2024-07-24 19:59:20.107597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:16.325 [2024-07-24 19:59:20.107660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1918788' to capture a snapshot of events at runtime. 00:05:16.325 [2024-07-24 19:59:20.107679] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.325 [2024-07-24 19:59:20.107703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.325 [2024-07-24 19:59:20.107717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1918788 for offline analysis/debug. 00:05:16.325 [2024-07-24 19:59:20.107759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.893 19:59:20 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.893 19:59:20 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:16.893 19:59:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.893 19:59:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.893 19:59:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.893 19:59:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.893 19:59:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.893 19:59:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.893 19:59:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.893 ************************************ 00:05:16.893 START TEST rpc_integrity 00:05:16.893 ************************************ 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:16.893 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.893 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.893 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.893 19:59:20 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.893 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.893 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.893 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.893 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.893 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.893 { 00:05:16.893 "name": "Malloc0", 00:05:16.893 "aliases": [ 00:05:16.893 "8d32e3fc-d476-4aad-b942-7431a9739644" 00:05:16.893 ], 00:05:16.893 "product_name": "Malloc disk", 00:05:16.893 "block_size": 512, 00:05:16.893 "num_blocks": 16384, 00:05:16.893 "uuid": "8d32e3fc-d476-4aad-b942-7431a9739644", 00:05:16.893 "assigned_rate_limits": { 00:05:16.893 "rw_ios_per_sec": 0, 00:05:16.893 "rw_mbytes_per_sec": 0, 00:05:16.893 "r_mbytes_per_sec": 0, 00:05:16.893 "w_mbytes_per_sec": 0 00:05:16.893 }, 00:05:16.893 "claimed": false, 00:05:16.893 "zoned": false, 00:05:16.893 "supported_io_types": { 00:05:16.893 "read": true, 00:05:16.893 "write": true, 00:05:16.893 "unmap": true, 00:05:16.893 "flush": true, 00:05:16.893 "reset": true, 00:05:16.893 "nvme_admin": false, 00:05:16.893 "nvme_io": false, 00:05:16.893 "nvme_io_md": false, 00:05:16.893 "write_zeroes": true, 00:05:16.893 "zcopy": true, 00:05:16.893 "get_zone_info": false, 00:05:16.893 "zone_management": false, 00:05:16.893 "zone_append": false, 00:05:16.893 "compare": false, 00:05:16.893 "compare_and_write": false, 00:05:16.893 "abort": true, 00:05:16.893 "seek_hole": false, 00:05:16.893 "seek_data": false, 00:05:16.893 "copy": true, 00:05:16.893 "nvme_iov_md": false 00:05:16.893 }, 00:05:16.893 "memory_domains": [ 00:05:16.893 { 00:05:16.893 "dma_device_id": "system", 00:05:16.893 "dma_device_type": 1 00:05:16.893 }, 00:05:16.893 { 00:05:16.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.893 "dma_device_type": 2 00:05:16.893 } 00:05:16.893 ], 00:05:16.893 "driver_specific": {} 00:05:16.893 } 00:05:16.893 ]' 00:05:17.152 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.152 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.152 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:17.152 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.152 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.153 [2024-07-24 19:59:20.725224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:17.153 [2024-07-24 19:59:20.725322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.153 [2024-07-24 19:59:20.725374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1da43e0 00:05:17.153 [2024-07-24 19:59:20.725408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.153 [2024-07-24 19:59:20.728596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:17.153 [2024-07-24 19:59:20.728630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.153 Passthru0 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.153 { 00:05:17.153 "name": "Malloc0", 00:05:17.153 "aliases": [ 00:05:17.153 "8d32e3fc-d476-4aad-b942-7431a9739644" 00:05:17.153 ], 00:05:17.153 "product_name": "Malloc disk", 00:05:17.153 "block_size": 512, 00:05:17.153 "num_blocks": 16384, 00:05:17.153 "uuid": "8d32e3fc-d476-4aad-b942-7431a9739644", 00:05:17.153 "assigned_rate_limits": { 00:05:17.153 "rw_ios_per_sec": 0, 00:05:17.153 "rw_mbytes_per_sec": 0, 00:05:17.153 "r_mbytes_per_sec": 0, 00:05:17.153 "w_mbytes_per_sec": 0 00:05:17.153 }, 00:05:17.153 "claimed": true, 00:05:17.153 "claim_type": "exclusive_write", 00:05:17.153 "zoned": false, 00:05:17.153 "supported_io_types": { 00:05:17.153 "read": true, 00:05:17.153 "write": true, 00:05:17.153 "unmap": true, 00:05:17.153 "flush": true, 00:05:17.153 "reset": true, 00:05:17.153 "nvme_admin": false, 00:05:17.153 "nvme_io": false, 00:05:17.153 "nvme_io_md": false, 00:05:17.153 "write_zeroes": true, 00:05:17.153 "zcopy": true, 00:05:17.153 "get_zone_info": false, 00:05:17.153 "zone_management": false, 00:05:17.153 "zone_append": false, 00:05:17.153 "compare": false, 00:05:17.153 "compare_and_write": false, 00:05:17.153 "abort": true, 00:05:17.153 "seek_hole": false, 00:05:17.153 "seek_data": false, 00:05:17.153 "copy": true, 00:05:17.153 "nvme_iov_md": false 00:05:17.153 }, 00:05:17.153 "memory_domains": [ 00:05:17.153 { 00:05:17.153 "dma_device_id": "system", 00:05:17.153 "dma_device_type": 1 00:05:17.153 }, 00:05:17.153 { 00:05:17.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.153 "dma_device_type": 2 00:05:17.153 } 00:05:17.153 ], 00:05:17.153 "driver_specific": {} 00:05:17.153 }, 00:05:17.153 { 00:05:17.153 "name": "Passthru0", 00:05:17.153 "aliases": [ 00:05:17.153 "a3ee200f-c19d-5ae8-97d9-10af15bd03b0" 00:05:17.153 ], 00:05:17.153 "product_name": "passthru", 00:05:17.153 "block_size": 512, 00:05:17.153 "num_blocks": 16384, 00:05:17.153 "uuid": "a3ee200f-c19d-5ae8-97d9-10af15bd03b0", 00:05:17.153 "assigned_rate_limits": { 00:05:17.153 "rw_ios_per_sec": 0, 00:05:17.153 "rw_mbytes_per_sec": 0, 00:05:17.153 "r_mbytes_per_sec": 0, 00:05:17.153 "w_mbytes_per_sec": 0 00:05:17.153 }, 00:05:17.153 "claimed": false, 00:05:17.153 "zoned": false, 00:05:17.153 "supported_io_types": { 00:05:17.153 "read": true, 00:05:17.153 "write": true, 00:05:17.153 "unmap": true, 00:05:17.153 "flush": true, 00:05:17.153 "reset": true, 00:05:17.153 "nvme_admin": false, 00:05:17.153 "nvme_io": false, 00:05:17.153 "nvme_io_md": false, 00:05:17.153 "write_zeroes": true, 00:05:17.153 "zcopy": true, 00:05:17.153 "get_zone_info": false, 00:05:17.153 "zone_management": false, 00:05:17.153 "zone_append": false, 00:05:17.153 "compare": false, 00:05:17.153 "compare_and_write": false, 00:05:17.153 "abort": true, 00:05:17.153 "seek_hole": false, 00:05:17.153 "seek_data": false, 00:05:17.153 "copy": true, 00:05:17.153 "nvme_iov_md": false 00:05:17.153 
}, 00:05:17.153 "memory_domains": [ 00:05:17.153 { 00:05:17.153 "dma_device_id": "system", 00:05:17.153 "dma_device_type": 1 00:05:17.153 }, 00:05:17.153 { 00:05:17.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.153 "dma_device_type": 2 00:05:17.153 } 00:05:17.153 ], 00:05:17.153 "driver_specific": { 00:05:17.153 "passthru": { 00:05:17.153 "name": "Passthru0", 00:05:17.153 "base_bdev_name": "Malloc0" 00:05:17.153 } 00:05:17.153 } 00:05:17.153 } 00:05:17.153 ]' 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.153 19:59:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.153 00:05:17.153 real 0m0.376s 00:05:17.153 user 0m0.267s 00:05:17.153 sys 0m0.041s 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.153 19:59:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.153 ************************************ 00:05:17.153 END TEST rpc_integrity 00:05:17.153 ************************************ 00:05:17.412 19:59:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.412 19:59:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.412 19:59:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.412 19:59:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.412 ************************************ 00:05:17.412 START TEST rpc_plugins 00:05:17.412 ************************************ 00:05:17.412 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:17.412 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:17.412 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.412 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.412 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.412 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:17.412 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:17.412 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.412 19:59:21 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.412 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.412 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:17.412 { 00:05:17.412 "name": "Malloc1", 00:05:17.412 "aliases": [ 00:05:17.412 "85fa46e0-7585-43c5-9833-a13ee71b939f" 00:05:17.412 ], 00:05:17.412 "product_name": "Malloc disk", 00:05:17.412 "block_size": 4096, 00:05:17.412 "num_blocks": 256, 00:05:17.412 "uuid": "85fa46e0-7585-43c5-9833-a13ee71b939f", 00:05:17.412 "assigned_rate_limits": { 00:05:17.412 "rw_ios_per_sec": 0, 00:05:17.412 "rw_mbytes_per_sec": 0, 00:05:17.412 "r_mbytes_per_sec": 0, 00:05:17.412 "w_mbytes_per_sec": 0 00:05:17.412 }, 00:05:17.412 "claimed": false, 00:05:17.412 "zoned": false, 00:05:17.412 "supported_io_types": { 00:05:17.412 "read": true, 00:05:17.412 "write": true, 00:05:17.412 "unmap": true, 00:05:17.412 "flush": true, 00:05:17.412 "reset": true, 00:05:17.412 "nvme_admin": false, 00:05:17.412 "nvme_io": false, 00:05:17.412 "nvme_io_md": false, 00:05:17.412 "write_zeroes": true, 00:05:17.412 "zcopy": true, 00:05:17.412 "get_zone_info": false, 00:05:17.412 "zone_management": false, 00:05:17.412 "zone_append": false, 00:05:17.412 "compare": false, 00:05:17.412 "compare_and_write": false, 00:05:17.412 "abort": true, 00:05:17.412 "seek_hole": false, 00:05:17.412 "seek_data": false, 00:05:17.412 "copy": true, 00:05:17.412 "nvme_iov_md": false 00:05:17.412 }, 00:05:17.412 "memory_domains": [ 00:05:17.412 { 00:05:17.412 "dma_device_id": "system", 00:05:17.412 "dma_device_type": 1 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.412 "dma_device_type": 2 00:05:17.413 } 00:05:17.413 ], 00:05:17.413 "driver_specific": {} 00:05:17.413 } 00:05:17.413 ]' 00:05:17.413 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:17.413 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:17.413 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:17.413 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.413 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.413 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.413 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.413 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.413 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.413 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.413 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.413 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:17.413 19:59:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.413 00:05:17.413 real 0m0.180s 00:05:17.413 user 0m0.127s 00:05:17.413 sys 0m0.017s 00:05:17.413 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.413 19:59:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.413 ************************************ 00:05:17.413 END TEST rpc_plugins 00:05:17.413 ************************************ 00:05:17.671 19:59:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.671 19:59:21 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.671 19:59:21 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.671 19:59:21 
rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.671 ************************************ 00:05:17.671 START TEST rpc_trace_cmd_test 00:05:17.671 ************************************ 00:05:17.671 19:59:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:17.671 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:17.671 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.671 19:59:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.671 19:59:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.671 19:59:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.671 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:17.672 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1918788", 00:05:17.672 "tpoint_group_mask": "0x8", 00:05:17.672 "iscsi_conn": { 00:05:17.672 "mask": "0x2", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "scsi": { 00:05:17.672 "mask": "0x4", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "bdev": { 00:05:17.672 "mask": "0x8", 00:05:17.672 "tpoint_mask": "0xffffffffffffffff" 00:05:17.672 }, 00:05:17.672 "nvmf_rdma": { 00:05:17.672 "mask": "0x10", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "nvmf_tcp": { 00:05:17.672 "mask": "0x20", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "ftl": { 00:05:17.672 "mask": "0x40", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "blobfs": { 00:05:17.672 "mask": "0x80", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "dsa": { 00:05:17.672 "mask": "0x200", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "thread": { 00:05:17.672 "mask": "0x400", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "nvme_pcie": { 00:05:17.672 "mask": "0x800", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "iaa": { 00:05:17.672 "mask": "0x1000", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "nvme_tcp": { 00:05:17.672 "mask": "0x2000", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "bdev_nvme": { 00:05:17.672 "mask": "0x4000", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 }, 00:05:17.672 "sock": { 00:05:17.672 "mask": "0x8000", 00:05:17.672 "tpoint_mask": "0x0" 00:05:17.672 } 00:05:17.672 }' 00:05:17.672 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:17.672 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:17.672 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.672 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.672 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.930 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.930 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.930 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.930 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.930 19:59:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:17.930 00:05:17.930 real 0m0.342s 00:05:17.930 user 0m0.308s 00:05:17.930 sys 0m0.025s 00:05:17.930 19:59:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.930 19:59:21 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.930 ************************************ 00:05:17.930 END TEST rpc_trace_cmd_test 00:05:17.930 ************************************ 00:05:17.930 19:59:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.930 19:59:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.930 19:59:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.930 19:59:21 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.930 19:59:21 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.930 19:59:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.930 ************************************ 00:05:17.930 START TEST rpc_daemon_integrity 00:05:17.930 ************************************ 00:05:17.930 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:17.930 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.930 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.930 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.930 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.930 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.930 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.189 { 00:05:18.189 "name": "Malloc2", 00:05:18.189 "aliases": [ 00:05:18.189 "c61de97c-9838-4344-b795-e4883ba69344" 00:05:18.189 ], 00:05:18.189 "product_name": "Malloc disk", 00:05:18.189 "block_size": 512, 00:05:18.189 "num_blocks": 16384, 00:05:18.189 "uuid": "c61de97c-9838-4344-b795-e4883ba69344", 00:05:18.189 "assigned_rate_limits": { 00:05:18.189 "rw_ios_per_sec": 0, 00:05:18.189 "rw_mbytes_per_sec": 0, 00:05:18.189 "r_mbytes_per_sec": 0, 00:05:18.189 "w_mbytes_per_sec": 0 00:05:18.189 }, 00:05:18.189 "claimed": false, 00:05:18.189 "zoned": false, 00:05:18.189 "supported_io_types": { 00:05:18.189 "read": true, 00:05:18.189 "write": true, 00:05:18.189 "unmap": true, 00:05:18.189 "flush": true, 00:05:18.189 "reset": true, 00:05:18.189 "nvme_admin": false, 00:05:18.189 "nvme_io": false, 00:05:18.189 "nvme_io_md": false, 00:05:18.189 "write_zeroes": true, 00:05:18.189 "zcopy": true, 00:05:18.189 "get_zone_info": false, 00:05:18.189 "zone_management": false, 00:05:18.189 "zone_append": false, 00:05:18.189 "compare": false, 00:05:18.189 "compare_and_write": false, 
00:05:18.189 "abort": true, 00:05:18.189 "seek_hole": false, 00:05:18.189 "seek_data": false, 00:05:18.189 "copy": true, 00:05:18.189 "nvme_iov_md": false 00:05:18.189 }, 00:05:18.189 "memory_domains": [ 00:05:18.189 { 00:05:18.189 "dma_device_id": "system", 00:05:18.189 "dma_device_type": 1 00:05:18.189 }, 00:05:18.189 { 00:05:18.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.189 "dma_device_type": 2 00:05:18.189 } 00:05:18.189 ], 00:05:18.189 "driver_specific": {} 00:05:18.189 } 00:05:18.189 ]' 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.189 [2024-07-24 19:59:21.857144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:18.189 [2024-07-24 19:59:21.857241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.189 [2024-07-24 19:59:21.857315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1da4610 00:05:18.189 [2024-07-24 19:59:21.857354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.189 [2024-07-24 19:59:21.859968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.189 [2024-07-24 19:59:21.860049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.189 Passthru0 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.189 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.189 { 00:05:18.189 "name": "Malloc2", 00:05:18.189 "aliases": [ 00:05:18.189 "c61de97c-9838-4344-b795-e4883ba69344" 00:05:18.189 ], 00:05:18.189 "product_name": "Malloc disk", 00:05:18.189 "block_size": 512, 00:05:18.189 "num_blocks": 16384, 00:05:18.189 "uuid": "c61de97c-9838-4344-b795-e4883ba69344", 00:05:18.189 "assigned_rate_limits": { 00:05:18.189 "rw_ios_per_sec": 0, 00:05:18.189 "rw_mbytes_per_sec": 0, 00:05:18.189 "r_mbytes_per_sec": 0, 00:05:18.189 "w_mbytes_per_sec": 0 00:05:18.189 }, 00:05:18.189 "claimed": true, 00:05:18.189 "claim_type": "exclusive_write", 00:05:18.189 "zoned": false, 00:05:18.189 "supported_io_types": { 00:05:18.189 "read": true, 00:05:18.189 "write": true, 00:05:18.189 "unmap": true, 00:05:18.189 "flush": true, 00:05:18.189 "reset": true, 00:05:18.189 "nvme_admin": false, 00:05:18.189 "nvme_io": false, 00:05:18.189 "nvme_io_md": false, 00:05:18.189 "write_zeroes": true, 00:05:18.189 "zcopy": true, 00:05:18.189 "get_zone_info": false, 00:05:18.189 "zone_management": false, 00:05:18.189 "zone_append": false, 00:05:18.189 "compare": false, 00:05:18.189 "compare_and_write": false, 00:05:18.189 "abort": true, 00:05:18.189 "seek_hole": false, 00:05:18.189 "seek_data": false, 00:05:18.189 "copy": true, 
00:05:18.189 "nvme_iov_md": false 00:05:18.189 }, 00:05:18.189 "memory_domains": [ 00:05:18.189 { 00:05:18.189 "dma_device_id": "system", 00:05:18.189 "dma_device_type": 1 00:05:18.189 }, 00:05:18.189 { 00:05:18.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.189 "dma_device_type": 2 00:05:18.189 } 00:05:18.189 ], 00:05:18.189 "driver_specific": {} 00:05:18.190 }, 00:05:18.190 { 00:05:18.190 "name": "Passthru0", 00:05:18.190 "aliases": [ 00:05:18.190 "1993316b-7e6d-5862-bca9-7a89752489d5" 00:05:18.190 ], 00:05:18.190 "product_name": "passthru", 00:05:18.190 "block_size": 512, 00:05:18.190 "num_blocks": 16384, 00:05:18.190 "uuid": "1993316b-7e6d-5862-bca9-7a89752489d5", 00:05:18.190 "assigned_rate_limits": { 00:05:18.190 "rw_ios_per_sec": 0, 00:05:18.190 "rw_mbytes_per_sec": 0, 00:05:18.190 "r_mbytes_per_sec": 0, 00:05:18.190 "w_mbytes_per_sec": 0 00:05:18.190 }, 00:05:18.190 "claimed": false, 00:05:18.190 "zoned": false, 00:05:18.190 "supported_io_types": { 00:05:18.190 "read": true, 00:05:18.190 "write": true, 00:05:18.190 "unmap": true, 00:05:18.190 "flush": true, 00:05:18.190 "reset": true, 00:05:18.190 "nvme_admin": false, 00:05:18.190 "nvme_io": false, 00:05:18.190 "nvme_io_md": false, 00:05:18.190 "write_zeroes": true, 00:05:18.190 "zcopy": true, 00:05:18.190 "get_zone_info": false, 00:05:18.190 "zone_management": false, 00:05:18.190 "zone_append": false, 00:05:18.190 "compare": false, 00:05:18.190 "compare_and_write": false, 00:05:18.190 "abort": true, 00:05:18.190 "seek_hole": false, 00:05:18.190 "seek_data": false, 00:05:18.190 "copy": true, 00:05:18.190 "nvme_iov_md": false 00:05:18.190 }, 00:05:18.190 "memory_domains": [ 00:05:18.190 { 00:05:18.190 "dma_device_id": "system", 00:05:18.190 "dma_device_type": 1 00:05:18.190 }, 00:05:18.190 { 00:05:18.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.190 "dma_device_type": 2 00:05:18.190 } 00:05:18.190 ], 00:05:18.190 "driver_specific": { 00:05:18.190 "passthru": { 00:05:18.190 "name": "Passthru0", 00:05:18.190 "base_bdev_name": "Malloc2" 00:05:18.190 } 00:05:18.190 } 00:05:18.190 } 00:05:18.190 ]' 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.190 19:59:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.190 19:59:21 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.448 19:59:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.448 00:05:18.448 real 0m0.387s 00:05:18.448 user 0m0.273s 00:05:18.448 sys 0m0.042s 00:05:18.448 19:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.448 19:59:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.448 ************************************ 00:05:18.448 END TEST rpc_daemon_integrity 00:05:18.448 ************************************ 00:05:18.448 19:59:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.448 19:59:22 rpc -- rpc/rpc.sh@84 -- # killprocess 1918788 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@950 -- # '[' -z 1918788 ']' 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@954 -- # kill -0 1918788 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@955 -- # uname 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1918788 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1918788' 00:05:18.448 killing process with pid 1918788 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@969 -- # kill 1918788 00:05:18.448 19:59:22 rpc -- common/autotest_common.sh@974 -- # wait 1918788 00:05:19.015 00:05:19.015 real 0m3.153s 00:05:19.015 user 0m4.068s 00:05:19.015 sys 0m0.984s 00:05:19.015 19:59:22 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.015 19:59:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.015 ************************************ 00:05:19.015 END TEST rpc 00:05:19.015 ************************************ 00:05:19.015 19:59:22 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.015 19:59:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.015 19:59:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.015 19:59:22 -- common/autotest_common.sh@10 -- # set +x 00:05:19.274 ************************************ 00:05:19.274 START TEST skip_rpc 00:05:19.274 ************************************ 00:05:19.274 19:59:22 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.274 * Looking for test storage... 
00:05:19.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.274 19:59:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.274 19:59:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:19.274 19:59:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:19.274 19:59:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.274 19:59:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.274 19:59:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.274 ************************************ 00:05:19.274 START TEST skip_rpc 00:05:19.274 ************************************ 00:05:19.274 19:59:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:19.274 19:59:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1919358 00:05:19.274 19:59:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:19.274 19:59:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.274 19:59:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:19.274 [2024-07-24 19:59:22.983972] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:05:19.274 [2024-07-24 19:59:22.984070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919358 ] 00:05:19.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.533 [2024-07-24 19:59:23.087217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.533 [2024-07-24 19:59:23.298730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1919358 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1919358 ']' 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1919358 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1919358 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1919358' 00:05:24.800 killing process with pid 1919358 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1919358 00:05:24.800 19:59:27 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1919358 00:05:24.800 00:05:24.800 real 0m5.669s 00:05:24.800 user 0m5.173s 00:05:24.800 sys 0m0.495s 00:05:24.800 19:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.058 19:59:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.058 ************************************ 00:05:25.058 END TEST skip_rpc 00:05:25.058 ************************************ 00:05:25.058 19:59:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:25.058 19:59:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.058 19:59:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.058 19:59:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.058 ************************************ 00:05:25.058 START TEST skip_rpc_with_json 00:05:25.058 ************************************ 00:05:25.058 19:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1920051 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1920051 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1920051 ']' 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
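The skip_rpc case that just finished above reduces to a single assertion: with spdk_tgt started via --no-rpc-server, any JSON-RPC call must fail, and the NOT wrapper checks for that non-zero exit. A minimal standalone sketch of the same flow, with the SPDK checkout path and the fixed sleep being illustrative stand-ins for the harness's own paths and readiness polling:

    # start the target with the RPC server disabled, then confirm an RPC fails
    SPDK_DIR=/path/to/spdk                        # illustrative; point at your checkout
    "$SPDK_DIR"/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                       # crude wait; the test harness polls instead
    if "$SPDK_DIR"/scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded with --no-rpc-server" >&2
        kill -9 "$pid"; exit 1
    fi
    kill -9 "$pid"; wait "$pid" 2>/dev/null || true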
00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.059 19:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.059 [2024-07-24 19:59:28.694552] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:05:25.059 [2024-07-24 19:59:28.694650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920051 ] 00:05:25.059 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.059 [2024-07-24 19:59:28.795514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.317 [2024-07-24 19:59:29.004910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.253 [2024-07-24 19:59:29.910176] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:26.253 request: 00:05:26.253 { 00:05:26.253 "trtype": "tcp", 00:05:26.253 "method": "nvmf_get_transports", 00:05:26.253 "req_id": 1 00:05:26.253 } 00:05:26.253 Got JSON-RPC error response 00:05:26.253 response: 00:05:26.253 { 00:05:26.253 "code": -19, 00:05:26.253 "message": "No such device" 00:05:26.253 } 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.253 [2024-07-24 19:59:29.922474] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.253 19:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.512 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.512 19:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.512 { 00:05:26.512 "subsystems": [ 00:05:26.512 { 00:05:26.512 "subsystem": "vfio_user_target", 00:05:26.512 "config": null 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "keyring", 00:05:26.512 "config": [] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "iobuf", 00:05:26.512 "config": [ 00:05:26.512 { 00:05:26.512 "method": "iobuf_set_options", 00:05:26.512 "params": { 00:05:26.512 "small_pool_count": 8192, 00:05:26.512 "large_pool_count": 1024, 00:05:26.512 "small_bufsize": 8192, 00:05:26.512 "large_bufsize": 
135168 00:05:26.512 } 00:05:26.512 } 00:05:26.512 ] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "sock", 00:05:26.512 "config": [ 00:05:26.512 { 00:05:26.512 "method": "sock_set_default_impl", 00:05:26.512 "params": { 00:05:26.512 "impl_name": "posix" 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "sock_impl_set_options", 00:05:26.512 "params": { 00:05:26.512 "impl_name": "ssl", 00:05:26.512 "recv_buf_size": 4096, 00:05:26.512 "send_buf_size": 4096, 00:05:26.512 "enable_recv_pipe": true, 00:05:26.512 "enable_quickack": false, 00:05:26.512 "enable_placement_id": 0, 00:05:26.512 "enable_zerocopy_send_server": true, 00:05:26.512 "enable_zerocopy_send_client": false, 00:05:26.512 "zerocopy_threshold": 0, 00:05:26.512 "tls_version": 0, 00:05:26.512 "enable_ktls": false 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "sock_impl_set_options", 00:05:26.512 "params": { 00:05:26.512 "impl_name": "posix", 00:05:26.512 "recv_buf_size": 2097152, 00:05:26.512 "send_buf_size": 2097152, 00:05:26.512 "enable_recv_pipe": true, 00:05:26.512 "enable_quickack": false, 00:05:26.512 "enable_placement_id": 0, 00:05:26.512 "enable_zerocopy_send_server": true, 00:05:26.512 "enable_zerocopy_send_client": false, 00:05:26.512 "zerocopy_threshold": 0, 00:05:26.512 "tls_version": 0, 00:05:26.512 "enable_ktls": false 00:05:26.512 } 00:05:26.512 } 00:05:26.512 ] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "vmd", 00:05:26.512 "config": [] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "accel", 00:05:26.512 "config": [ 00:05:26.512 { 00:05:26.512 "method": "accel_set_options", 00:05:26.512 "params": { 00:05:26.512 "small_cache_size": 128, 00:05:26.512 "large_cache_size": 16, 00:05:26.512 "task_count": 2048, 00:05:26.512 "sequence_count": 2048, 00:05:26.512 "buf_count": 2048 00:05:26.512 } 00:05:26.512 } 00:05:26.512 ] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "bdev", 00:05:26.512 "config": [ 00:05:26.512 { 00:05:26.512 "method": "bdev_set_options", 00:05:26.512 "params": { 00:05:26.512 "bdev_io_pool_size": 65535, 00:05:26.512 "bdev_io_cache_size": 256, 00:05:26.512 "bdev_auto_examine": true, 00:05:26.512 "iobuf_small_cache_size": 128, 00:05:26.512 "iobuf_large_cache_size": 16 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "bdev_raid_set_options", 00:05:26.512 "params": { 00:05:26.512 "process_window_size_kb": 1024, 00:05:26.512 "process_max_bandwidth_mb_sec": 0 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "bdev_iscsi_set_options", 00:05:26.512 "params": { 00:05:26.512 "timeout_sec": 30 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "bdev_nvme_set_options", 00:05:26.512 "params": { 00:05:26.512 "action_on_timeout": "none", 00:05:26.512 "timeout_us": 0, 00:05:26.512 "timeout_admin_us": 0, 00:05:26.512 "keep_alive_timeout_ms": 10000, 00:05:26.512 "arbitration_burst": 0, 00:05:26.512 "low_priority_weight": 0, 00:05:26.512 "medium_priority_weight": 0, 00:05:26.512 "high_priority_weight": 0, 00:05:26.512 "nvme_adminq_poll_period_us": 10000, 00:05:26.512 "nvme_ioq_poll_period_us": 0, 00:05:26.512 "io_queue_requests": 0, 00:05:26.512 "delay_cmd_submit": true, 00:05:26.512 "transport_retry_count": 4, 00:05:26.512 "bdev_retry_count": 3, 00:05:26.512 "transport_ack_timeout": 0, 00:05:26.512 "ctrlr_loss_timeout_sec": 0, 00:05:26.512 "reconnect_delay_sec": 0, 00:05:26.512 "fast_io_fail_timeout_sec": 0, 00:05:26.512 "disable_auto_failback": false, 00:05:26.512 "generate_uuids": 
false, 00:05:26.512 "transport_tos": 0, 00:05:26.512 "nvme_error_stat": false, 00:05:26.512 "rdma_srq_size": 0, 00:05:26.512 "io_path_stat": false, 00:05:26.512 "allow_accel_sequence": false, 00:05:26.512 "rdma_max_cq_size": 0, 00:05:26.512 "rdma_cm_event_timeout_ms": 0, 00:05:26.512 "dhchap_digests": [ 00:05:26.512 "sha256", 00:05:26.512 "sha384", 00:05:26.512 "sha512" 00:05:26.512 ], 00:05:26.512 "dhchap_dhgroups": [ 00:05:26.512 "null", 00:05:26.512 "ffdhe2048", 00:05:26.512 "ffdhe3072", 00:05:26.512 "ffdhe4096", 00:05:26.512 "ffdhe6144", 00:05:26.512 "ffdhe8192" 00:05:26.512 ] 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "bdev_nvme_set_hotplug", 00:05:26.512 "params": { 00:05:26.512 "period_us": 100000, 00:05:26.512 "enable": false 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "bdev_wait_for_examine" 00:05:26.512 } 00:05:26.512 ] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "scsi", 00:05:26.512 "config": null 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "scheduler", 00:05:26.512 "config": [ 00:05:26.512 { 00:05:26.512 "method": "framework_set_scheduler", 00:05:26.512 "params": { 00:05:26.512 "name": "static" 00:05:26.512 } 00:05:26.512 } 00:05:26.512 ] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "vhost_scsi", 00:05:26.512 "config": [] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "vhost_blk", 00:05:26.512 "config": [] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "ublk", 00:05:26.512 "config": [] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "nbd", 00:05:26.512 "config": [] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "nvmf", 00:05:26.512 "config": [ 00:05:26.512 { 00:05:26.512 "method": "nvmf_set_config", 00:05:26.512 "params": { 00:05:26.512 "discovery_filter": "match_any", 00:05:26.512 "admin_cmd_passthru": { 00:05:26.512 "identify_ctrlr": false 00:05:26.512 } 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "nvmf_set_max_subsystems", 00:05:26.512 "params": { 00:05:26.512 "max_subsystems": 1024 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "nvmf_set_crdt", 00:05:26.512 "params": { 00:05:26.512 "crdt1": 0, 00:05:26.512 "crdt2": 0, 00:05:26.512 "crdt3": 0 00:05:26.512 } 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "method": "nvmf_create_transport", 00:05:26.512 "params": { 00:05:26.512 "trtype": "TCP", 00:05:26.512 "max_queue_depth": 128, 00:05:26.512 "max_io_qpairs_per_ctrlr": 127, 00:05:26.512 "in_capsule_data_size": 4096, 00:05:26.512 "max_io_size": 131072, 00:05:26.512 "io_unit_size": 131072, 00:05:26.512 "max_aq_depth": 128, 00:05:26.512 "num_shared_buffers": 511, 00:05:26.512 "buf_cache_size": 4294967295, 00:05:26.512 "dif_insert_or_strip": false, 00:05:26.512 "zcopy": false, 00:05:26.512 "c2h_success": true, 00:05:26.512 "sock_priority": 0, 00:05:26.512 "abort_timeout_sec": 1, 00:05:26.512 "ack_timeout": 0, 00:05:26.512 "data_wr_pool_size": 0 00:05:26.512 } 00:05:26.512 } 00:05:26.512 ] 00:05:26.512 }, 00:05:26.512 { 00:05:26.512 "subsystem": "iscsi", 00:05:26.512 "config": [ 00:05:26.512 { 00:05:26.512 "method": "iscsi_set_options", 00:05:26.512 "params": { 00:05:26.512 "node_base": "iqn.2016-06.io.spdk", 00:05:26.512 "max_sessions": 128, 00:05:26.513 "max_connections_per_session": 2, 00:05:26.513 "max_queue_depth": 64, 00:05:26.513 "default_time2wait": 2, 00:05:26.513 "default_time2retain": 20, 00:05:26.513 "first_burst_length": 8192, 00:05:26.513 "immediate_data": true, 00:05:26.513 "allow_duplicated_isid": 
false, 00:05:26.513 "error_recovery_level": 0, 00:05:26.513 "nop_timeout": 60, 00:05:26.513 "nop_in_interval": 30, 00:05:26.513 "disable_chap": false, 00:05:26.513 "require_chap": false, 00:05:26.513 "mutual_chap": false, 00:05:26.513 "chap_group": 0, 00:05:26.513 "max_large_datain_per_connection": 64, 00:05:26.513 "max_r2t_per_connection": 4, 00:05:26.513 "pdu_pool_size": 36864, 00:05:26.513 "immediate_data_pool_size": 16384, 00:05:26.513 "data_out_pool_size": 2048 00:05:26.513 } 00:05:26.513 } 00:05:26.513 ] 00:05:26.513 } 00:05:26.513 ] 00:05:26.513 } 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1920051 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1920051 ']' 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1920051 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1920051 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1920051' 00:05:26.513 killing process with pid 1920051 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1920051 00:05:26.513 19:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1920051 00:05:27.080 19:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1920322 00:05:27.080 19:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.080 19:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:32.372 19:59:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1920322 00:05:32.372 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1920322 ']' 00:05:32.372 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1920322 00:05:32.372 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:32.372 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.373 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1920322 00:05:32.373 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.373 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.373 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1920322' 00:05:32.373 killing process with pid 1920322 00:05:32.373 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1920322 00:05:32.373 19:59:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
1920322 00:05:32.940 19:59:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:32.940 19:59:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:32.941 00:05:32.941 real 0m7.801s 00:05:32.941 user 0m7.413s 00:05:32.941 sys 0m1.195s 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 ************************************ 00:05:32.941 END TEST skip_rpc_with_json 00:05:32.941 ************************************ 00:05:32.941 19:59:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:32.941 19:59:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.941 19:59:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.941 19:59:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 ************************************ 00:05:32.941 START TEST skip_rpc_with_delay 00:05:32.941 ************************************ 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.941 [2024-07-24 19:59:36.606472] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
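The *ERROR* just above is the expected outcome of skip_rpc_with_delay, not a failure: --wait-for-rpc holds subsystem initialization until a framework_start_init RPC arrives, which can never happen once --no-rpc-server disables the RPC listener, so spdk_app_start rejects the combination up front. The check therefore reduces to asserting a non-zero exit; a sketch with an illustrative path:

    # expected to fail: waiting for an RPC that can never arrive
    /path/to/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo "exit code: $?"                          # non-zero by design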
00:05:32.941 [2024-07-24 19:59:36.606616] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.941 00:05:32.941 real 0m0.142s 00:05:32.941 user 0m0.088s 00:05:32.941 sys 0m0.052s 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.941 19:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 ************************************ 00:05:32.941 END TEST skip_rpc_with_delay 00:05:32.941 ************************************ 00:05:32.941 19:59:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:32.941 19:59:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:32.941 19:59:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:32.941 19:59:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.941 19:59:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.941 19:59:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 ************************************ 00:05:32.941 START TEST exit_on_failed_rpc_init 00:05:32.941 ************************************ 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1921045 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1921045 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1921045 ']' 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.941 19:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.200 [2024-07-24 19:59:36.824457] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:05:33.200 [2024-07-24 19:59:36.824634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921045 ] 00:05:33.200 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.200 [2024-07-24 19:59:36.962839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.459 [2024-07-24 19:59:37.166474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:34.028 19:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.028 [2024-07-24 19:59:37.671327] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
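With the first target holding /var/tmp/spdk.sock, the second instance now starting on core mask 0x2 is expected to fail: both default to the same RPC socket path, so the test only passes if the newcomer reports the address as busy and exits. Condensed, reusing the NOT helper sketched earlier:

    build/bin/spdk_tgt -m 0x1 &     # first target claims /var/tmp/spdk.sock
    NOT build/bin/spdk_tgt -m 0x2   # second target must fail: socket in use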
00:05:34.028 [2024-07-24 19:59:37.671450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921054 ] 00:05:34.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.028 [2024-07-24 19:59:37.754284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.287 [2024-07-24 19:59:37.902367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.287 [2024-07-24 19:59:37.902512] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:34.287 [2024-07-24 19:59:37.902539] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:34.287 [2024-07-24 19:59:37.902556] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1921045 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1921045 ']' 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1921045 00:05:34.287 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:34.546 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.546 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1921045 00:05:34.546 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.546 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.546 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1921045' 00:05:34.546 killing process with pid 1921045 00:05:34.546 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1921045 00:05:34.546 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1921045 00:05:35.114 00:05:35.114 real 0m2.033s 00:05:35.114 user 0m2.309s 00:05:35.114 sys 0m0.757s 00:05:35.114 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.114 19:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.114 ************************************ 00:05:35.114 END TEST exit_on_failed_rpc_init 00:05:35.114 ************************************ 00:05:35.114 19:59:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.114 00:05:35.114 real 0m15.981s 00:05:35.114 user 0m15.097s 00:05:35.114 sys 0m2.745s 00:05:35.114 19:59:38 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.114 19:59:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.114 ************************************ 00:05:35.114 END TEST skip_rpc 00:05:35.114 ************************************ 00:05:35.114 19:59:38 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.114 19:59:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.114 19:59:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.114 19:59:38 -- common/autotest_common.sh@10 -- # set +x 00:05:35.114 ************************************ 00:05:35.114 START TEST rpc_client 00:05:35.114 ************************************ 00:05:35.114 19:59:38 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.379 * Looking for test storage... 00:05:35.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:35.379 19:59:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:35.379 OK 00:05:35.379 19:59:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.379 00:05:35.379 real 0m0.117s 00:05:35.379 user 0m0.057s 00:05:35.379 sys 0m0.067s 00:05:35.379 19:59:38 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.379 19:59:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.379 ************************************ 00:05:35.379 END TEST rpc_client 00:05:35.379 ************************************ 00:05:35.379 19:59:39 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:35.379 19:59:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.379 19:59:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.379 19:59:39 -- common/autotest_common.sh@10 -- # set +x 00:05:35.379 ************************************ 00:05:35.379 START TEST json_config 00:05:35.379 ************************************ 00:05:35.379 19:59:39 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
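Sourcing nvmf/common.sh above gives json_config its TCP defaults (NVMF_PORT=4420, NVMF_TCP_IP_ADDRESS=127.0.0.1). Once its target is up, the test uses those values to provision a small NVMe-oF configuration over the RPC socket before snapshotting it; the sequence traced further below, condensed into the underlying rpc.py calls:

    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420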
00:05:35.379 19:59:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.379 19:59:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.379 19:59:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.379 19:59:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.379 19:59:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.379 19:59:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.379 19:59:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.379 19:59:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.379 19:59:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@47 -- # : 0 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.379 19:59:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:35.379 19:59:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:35.380 19:59:39 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.380 19:59:39 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:35.380 INFO: JSON configuration test init 00:05:35.380 19:59:39 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:35.380 19:59:39 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.380 19:59:39 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.380 19:59:39 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:35.380 19:59:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:35.380 19:59:39 json_config -- json_config/common.sh@10 -- # shift 00:05:35.380 19:59:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.380 19:59:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.380 19:59:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.380 19:59:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:05:35.380 19:59:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.380 19:59:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1921421 00:05:35.380 19:59:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:35.380 19:59:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.380 Waiting for target to run... 00:05:35.380 19:59:39 json_config -- json_config/common.sh@25 -- # waitforlisten 1921421 /var/tmp/spdk_tgt.sock 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 1921421 ']' 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.380 19:59:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.639 [2024-07-24 19:59:39.171048] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:05:35.639 [2024-07-24 19:59:39.171148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921421 ] 00:05:35.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.206 [2024-07-24 19:59:39.770328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.206 [2024-07-24 19:59:39.959713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.774 19:59:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.774 19:59:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:36.774 19:59:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:36.774 00:05:36.774 19:59:40 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:36.774 19:59:40 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:36.774 19:59:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.774 19:59:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.774 19:59:40 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:36.774 19:59:40 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:36.774 19:59:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.774 19:59:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.774 19:59:40 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:36.774 19:59:40 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:36.774 19:59:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:40.958 19:59:43 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:05:40.958 19:59:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:40.958 19:59:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.958 19:59:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.958 19:59:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:40.958 19:59:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:40.958 19:59:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:40.958 19:59:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:40.958 19:59:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:40.958 19:59:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@51 -- # sort 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:40.958 19:59:44 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:40.958 19:59:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.959 19:59:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:40.959 19:59:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.959 19:59:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:40.959 19:59:44 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.959 19:59:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.217 MallocForNvmf0 00:05:41.217 
19:59:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.217 19:59:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.476 MallocForNvmf1 00:05:41.476 19:59:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:41.477 19:59:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:41.735 [2024-07-24 19:59:45.296681] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.735 19:59:45 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.735 19:59:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.995 19:59:45 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.995 19:59:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.254 19:59:45 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.254 19:59:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.513 19:59:46 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:42.513 19:59:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:42.772 [2024-07-24 19:59:46.485327] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:42.772 19:59:46 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:42.772 19:59:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.772 19:59:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.772 19:59:46 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:42.772 19:59:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.772 19:59:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.031 19:59:46 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:43.031 19:59:46 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.031 19:59:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.290 MallocBdevForConfigChangeCheck 00:05:43.290 19:59:46 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:43.290 19:59:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.290 19:59:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.290 19:59:46 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:43.290 19:59:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.858 19:59:47 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:43.858 INFO: shutting down applications... 00:05:43.858 19:59:47 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:43.858 19:59:47 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:43.858 19:59:47 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:43.858 19:59:47 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:45.763 Calling clear_iscsi_subsystem 00:05:45.763 Calling clear_nvmf_subsystem 00:05:45.763 Calling clear_nbd_subsystem 00:05:45.763 Calling clear_ublk_subsystem 00:05:45.763 Calling clear_vhost_blk_subsystem 00:05:45.763 Calling clear_vhost_scsi_subsystem 00:05:45.763 Calling clear_bdev_subsystem 00:05:45.763 19:59:49 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:45.763 19:59:49 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:45.763 19:59:49 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:45.763 19:59:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.763 19:59:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:45.763 19:59:49 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:46.331 19:59:49 json_config -- json_config/json_config.sh@349 -- # break 00:05:46.331 19:59:49 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:46.331 19:59:49 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:46.331 19:59:49 json_config -- json_config/common.sh@31 -- # local app=target 00:05:46.331 19:59:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.331 19:59:49 json_config -- json_config/common.sh@35 -- # [[ -n 1921421 ]] 00:05:46.331 19:59:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1921421 00:05:46.331 19:59:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.331 19:59:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.331 19:59:49 json_config -- json_config/common.sh@41 -- # kill -0 1921421 00:05:46.331 19:59:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.910 19:59:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.910 19:59:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.910 19:59:50 json_config -- json_config/common.sh@41 -- # kill -0 1921421 00:05:46.910 19:59:50 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:46.910 19:59:50 json_config -- json_config/common.sh@43 -- # break 00:05:46.910 19:59:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:46.910 19:59:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:46.910 SPDK target shutdown done 00:05:46.910 19:59:50 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:46.910 INFO: relaunching applications... 00:05:46.910 19:59:50 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.910 19:59:50 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.910 19:59:50 json_config -- json_config/common.sh@10 -- # shift 00:05:46.910 19:59:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.910 19:59:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.910 19:59:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.910 19:59:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.910 19:59:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.910 19:59:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1922822 00:05:46.910 19:59:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.910 Waiting for target to run... 00:05:46.910 19:59:50 json_config -- json_config/common.sh@25 -- # waitforlisten 1922822 /var/tmp/spdk_tgt.sock 00:05:46.910 19:59:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.910 19:59:50 json_config -- common/autotest_common.sh@831 -- # '[' -z 1922822 ']' 00:05:46.910 19:59:50 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.910 19:59:50 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.910 19:59:50 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.910 19:59:50 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.910 19:59:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.910 [2024-07-24 19:59:50.558074] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
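The relaunch above is the heart of the round-trip: the configuration captured earlier with save_config is replayed through --json, so the TCP transport and listener must come back during boot rather than via fresh RPC calls. Reduced to its two ends (paths as in this workspace):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json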
00:05:46.910 [2024-07-24 19:59:50.558186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922822 ] 00:05:46.910 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.513 [2024-07-24 19:59:50.991874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.513 [2024-07-24 19:59:51.159294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.798 [2024-07-24 19:59:54.278669] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.798 [2024-07-24 19:59:54.311478] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:50.799 19:59:54 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.799 19:59:54 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:50.799 19:59:54 json_config -- json_config/common.sh@26 -- # echo '' 00:05:50.799 00:05:50.799 19:59:54 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:50.799 19:59:54 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:50.799 INFO: Checking if target configuration is the same... 00:05:50.799 19:59:54 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.799 19:59:54 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:50.799 19:59:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.799 + '[' 2 -ne 2 ']' 00:05:50.799 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.799 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:50.799 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:50.799 +++ basename /dev/fd/62 00:05:50.799 ++ mktemp /tmp/62.XXX 00:05:50.799 + tmp_file_1=/tmp/62.81W 00:05:50.799 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.799 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.799 + tmp_file_2=/tmp/spdk_tgt_config.json.3GR 00:05:50.799 + ret=0 00:05:50.799 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.365 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.624 + diff -u /tmp/62.81W /tmp/spdk_tgt_config.json.3GR 00:05:51.624 + echo 'INFO: JSON config files are the same' 00:05:51.624 INFO: JSON config files are the same 00:05:51.624 + rm /tmp/62.81W /tmp/spdk_tgt_config.json.3GR 00:05:51.624 + exit 0 00:05:51.624 19:59:55 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:51.624 19:59:55 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:51.624 INFO: changing configuration and checking if this can be detected... 
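The 'JSON config files are the same' verdict above comes from normalizing both documents with config_filter.py -method sort before diffing, so key ordering cannot produce a false mismatch. A sketch of the comparison (the temporary filenames are illustrative; json_diff.sh generates its own with mktemp):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ref.sorted
    diff -u /tmp/ref.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'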
00:05:51.624 19:59:55 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:51.624 19:59:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:51.882 19:59:55 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.882 19:59:55 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:51.882 19:59:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.882 + '[' 2 -ne 2 ']' 00:05:51.882 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:51.883 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:51.883 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:51.883 +++ basename /dev/fd/62 00:05:51.883 ++ mktemp /tmp/62.XXX 00:05:51.883 + tmp_file_1=/tmp/62.Wcx 00:05:51.883 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.883 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:51.883 + tmp_file_2=/tmp/spdk_tgt_config.json.U2X 00:05:51.883 + ret=0 00:05:51.883 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.818 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.818 + diff -u /tmp/62.Wcx /tmp/spdk_tgt_config.json.U2X 00:05:52.818 + ret=1 00:05:52.818 + echo '=== Start of file: /tmp/62.Wcx ===' 00:05:52.818 + cat /tmp/62.Wcx 00:05:52.818 + echo '=== End of file: /tmp/62.Wcx ===' 00:05:52.818 + echo '' 00:05:52.818 + echo '=== Start of file: /tmp/spdk_tgt_config.json.U2X ===' 00:05:52.818 + cat /tmp/spdk_tgt_config.json.U2X 00:05:52.818 + echo '=== End of file: /tmp/spdk_tgt_config.json.U2X ===' 00:05:52.818 + echo '' 00:05:52.818 + rm /tmp/62.Wcx /tmp/spdk_tgt_config.json.U2X 00:05:52.818 + exit 1 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:52.818 INFO: configuration change detected. 
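Deleting MallocBdevForConfigChangeCheck above is the deliberate mutation: that bdev appears to exist only so removing it perturbs the live configuration, and the subsequent sorted diff must now exit non-zero. In outline:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if ! test/json_config/json_diff.sh \
            <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json; then
        echo 'INFO: configuration change detected.'
    fi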
00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:52.818 19:59:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.818 19:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@321 -- # [[ -n 1922822 ]] 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:52.818 19:59:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.818 19:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:52.818 19:59:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.818 19:59:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.818 19:59:56 json_config -- json_config/json_config.sh@327 -- # killprocess 1922822 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@950 -- # '[' -z 1922822 ']' 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@954 -- # kill -0 1922822 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@955 -- # uname 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1922822 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1922822' 00:05:52.819 killing process with pid 1922822 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@969 -- # kill 1922822 00:05:52.819 19:59:56 json_config -- common/autotest_common.sh@974 -- # wait 1922822 00:05:54.721 19:59:58 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.721 19:59:58 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:54.721 19:59:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.721 19:59:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.721 19:59:58 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:54.721 19:59:58 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:54.721 INFO: Success 00:05:54.721 00:05:54.721 real 0m19.315s 
00:05:54.721 user 0m23.861s 00:05:54.721 sys 0m2.875s 00:05:54.722 19:59:58 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.722 19:59:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.722 ************************************ 00:05:54.722 END TEST json_config 00:05:54.722 ************************************ 00:05:54.722 19:59:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:54.722 19:59:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.722 19:59:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.722 19:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.722 ************************************ 00:05:54.722 START TEST json_config_extra_key 00:05:54.722 ************************************ 00:05:54.722 19:59:58 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:54.722 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.722 19:59:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.722 19:59:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.722 19:59:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.722 19:59:58 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.722 19:59:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.722 19:59:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.722 19:59:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:54.722 19:59:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:54.722 19:59:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:54.981 19:59:58 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:54.981 INFO: launching applications... 00:05:54.981 19:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1923799 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.981 Waiting for target to run... 00:05:54.981 19:59:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1923799 /var/tmp/spdk_tgt.sock 00:05:54.981 19:59:58 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1923799 ']' 00:05:54.981 19:59:58 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.981 19:59:58 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.981 19:59:58 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.981 19:59:58 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.981 19:59:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.981 [2024-07-24 19:59:58.632172] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
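Once the extra_key target reports in, the test shuts it straight back down; the cooperative-shutdown idiom in the following trace sends SIGINT and then polls the PID in half-second steps, bounded by the (( i < 30 )) guard. In outline:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # process gone: clean shutdown
        sleep 0.5
    done
    echo 'SPDK target shutdown done'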
00:05:54.981 [2024-07-24 19:59:58.632361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923799 ] 00:05:54.981 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.549 [2024-07-24 19:59:59.137473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.549 [2024-07-24 19:59:59.303743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.116 19:59:59 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.116 19:59:59 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:56.116 00:05:56.116 19:59:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:56.116 INFO: shutting down applications... 00:05:56.116 19:59:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1923799 ]] 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1923799 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1923799 00:05:56.116 19:59:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:56.684 20:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:56.684 20:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.684 20:00:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1923799 00:05:56.684 20:00:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.253 20:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.253 20:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.253 20:00:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1923799 00:05:57.253 20:00:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:57.253 20:00:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:57.253 20:00:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:57.253 20:00:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:57.253 SPDK target shutdown done 00:05:57.253 20:00:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:57.253 Success 00:05:57.253 00:05:57.253 real 0m2.479s 00:05:57.253 user 0m2.258s 00:05:57.253 sys 0m0.676s 00:05:57.253 20:00:00 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.253 20:00:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.253 ************************************ 00:05:57.253 END TEST json_config_extra_key 00:05:57.253 ************************************ 00:05:57.253 20:00:00 -- spdk/autotest.sh@174 -- # 
run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.253 20:00:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.253 20:00:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.253 20:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:57.253 ************************************ 00:05:57.253 START TEST alias_rpc 00:05:57.253 ************************************ 00:05:57.253 20:00:00 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.253 * Looking for test storage... 00:05:57.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:57.253 20:00:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.253 20:00:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1924252 00:05:57.253 20:00:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.253 20:00:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1924252 00:05:57.253 20:00:01 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1924252 ']' 00:05:57.253 20:00:01 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.253 20:00:01 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.253 20:00:01 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.253 20:00:01 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.253 20:00:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.511 [2024-07-24 20:00:01.102223] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:05:57.511 [2024-07-24 20:00:01.102343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924252 ] 00:05:57.511 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.511 [2024-07-24 20:00:01.201509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.770 [2024-07-24 20:00:01.400419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.026 20:00:01 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.026 20:00:01 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.026 20:00:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:58.588 20:00:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1924252 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1924252 ']' 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1924252 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1924252 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1924252' 00:05:58.588 killing process with pid 1924252 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@969 -- # kill 1924252 00:05:58.588 20:00:02 alias_rpc -- common/autotest_common.sh@974 -- # wait 1924252 00:05:59.153 00:05:59.153 real 0m1.872s 00:05:59.153 user 0m2.005s 00:05:59.153 sys 0m0.649s 00:05:59.153 20:00:02 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.153 20:00:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.153 ************************************ 00:05:59.153 END TEST alias_rpc 00:05:59.153 ************************************ 00:05:59.153 20:00:02 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:59.153 20:00:02 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:59.153 20:00:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.153 20:00:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.153 20:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:59.153 ************************************ 00:05:59.153 START TEST spdkcli_tcp 00:05:59.153 ************************************ 00:05:59.153 20:00:02 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:59.413 * Looking for test storage... 
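The two teardowns traced above share one pattern: signal the target, then poll it with kill -0 until it exits. A minimal sketch of both helpers, assuming $app_pid holds the spdk_tgt PID (1923799 and 1924252 in this run); the real autotest_common.sh versions carry extra platform branches not shown here:

    # graceful stop with a bounded wait (json_config/common.sh pattern)
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # target gone, stop waiting
        sleep 0.5
    done

    # killprocess pattern: refuse to kill if the command name is sudo
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                        # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }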
00:05:59.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:59.413 20:00:02 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.413 20:00:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1924557 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:59.413 20:00:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1924557 00:05:59.413 20:00:02 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1924557 ']' 00:05:59.413 20:00:02 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.413 20:00:02 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.413 20:00:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.413 20:00:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.413 20:00:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.413 [2024-07-24 20:00:03.084066] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
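spdkcli_tcp exercises the same JSON-RPC server over TCP rather than the default UNIX socket; the trace below bridges the two with socat before calling rpc_get_methods through 127.0.0.1:9998. The same setup in isolation, with paths assumed relative to an SPDK checkout:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge, as in tcp.sh
    socat_pid=$!
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"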
00:05:59.413 [2024-07-24 20:00:03.084237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924557 ] 00:05:59.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.672 [2024-07-24 20:00:03.208680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.672 [2024-07-24 20:00:03.379716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.672 [2024-07-24 20:00:03.379723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.930 20:00:03 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.930 20:00:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:59.930 20:00:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1924688 00:05:59.931 20:00:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:59.931 20:00:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:00.496 [ 00:06:00.496 "bdev_malloc_delete", 00:06:00.496 "bdev_malloc_create", 00:06:00.496 "bdev_null_resize", 00:06:00.496 "bdev_null_delete", 00:06:00.496 "bdev_null_create", 00:06:00.496 "bdev_nvme_cuse_unregister", 00:06:00.496 "bdev_nvme_cuse_register", 00:06:00.496 "bdev_opal_new_user", 00:06:00.496 "bdev_opal_set_lock_state", 00:06:00.496 "bdev_opal_delete", 00:06:00.496 "bdev_opal_get_info", 00:06:00.496 "bdev_opal_create", 00:06:00.496 "bdev_nvme_opal_revert", 00:06:00.496 "bdev_nvme_opal_init", 00:06:00.496 "bdev_nvme_send_cmd", 00:06:00.496 "bdev_nvme_get_path_iostat", 00:06:00.496 "bdev_nvme_get_mdns_discovery_info", 00:06:00.496 "bdev_nvme_stop_mdns_discovery", 00:06:00.496 "bdev_nvme_start_mdns_discovery", 00:06:00.496 "bdev_nvme_set_multipath_policy", 00:06:00.496 "bdev_nvme_set_preferred_path", 00:06:00.496 "bdev_nvme_get_io_paths", 00:06:00.496 "bdev_nvme_remove_error_injection", 00:06:00.496 "bdev_nvme_add_error_injection", 00:06:00.496 "bdev_nvme_get_discovery_info", 00:06:00.496 "bdev_nvme_stop_discovery", 00:06:00.496 "bdev_nvme_start_discovery", 00:06:00.496 "bdev_nvme_get_controller_health_info", 00:06:00.496 "bdev_nvme_disable_controller", 00:06:00.496 "bdev_nvme_enable_controller", 00:06:00.496 "bdev_nvme_reset_controller", 00:06:00.496 "bdev_nvme_get_transport_statistics", 00:06:00.496 "bdev_nvme_apply_firmware", 00:06:00.496 "bdev_nvme_detach_controller", 00:06:00.496 "bdev_nvme_get_controllers", 00:06:00.496 "bdev_nvme_attach_controller", 00:06:00.496 "bdev_nvme_set_hotplug", 00:06:00.496 "bdev_nvme_set_options", 00:06:00.496 "bdev_passthru_delete", 00:06:00.496 "bdev_passthru_create", 00:06:00.496 "bdev_lvol_set_parent_bdev", 00:06:00.496 "bdev_lvol_set_parent", 00:06:00.496 "bdev_lvol_check_shallow_copy", 00:06:00.496 "bdev_lvol_start_shallow_copy", 00:06:00.496 "bdev_lvol_grow_lvstore", 00:06:00.496 "bdev_lvol_get_lvols", 00:06:00.496 "bdev_lvol_get_lvstores", 00:06:00.496 "bdev_lvol_delete", 00:06:00.496 "bdev_lvol_set_read_only", 00:06:00.496 "bdev_lvol_resize", 00:06:00.496 "bdev_lvol_decouple_parent", 00:06:00.496 "bdev_lvol_inflate", 00:06:00.496 "bdev_lvol_rename", 00:06:00.496 "bdev_lvol_clone_bdev", 00:06:00.496 "bdev_lvol_clone", 00:06:00.496 "bdev_lvol_snapshot", 00:06:00.496 "bdev_lvol_create", 00:06:00.496 "bdev_lvol_delete_lvstore", 00:06:00.496 
"bdev_lvol_rename_lvstore", 00:06:00.496 "bdev_lvol_create_lvstore", 00:06:00.496 "bdev_raid_set_options", 00:06:00.496 "bdev_raid_remove_base_bdev", 00:06:00.496 "bdev_raid_add_base_bdev", 00:06:00.496 "bdev_raid_delete", 00:06:00.496 "bdev_raid_create", 00:06:00.496 "bdev_raid_get_bdevs", 00:06:00.496 "bdev_error_inject_error", 00:06:00.496 "bdev_error_delete", 00:06:00.496 "bdev_error_create", 00:06:00.496 "bdev_split_delete", 00:06:00.496 "bdev_split_create", 00:06:00.496 "bdev_delay_delete", 00:06:00.496 "bdev_delay_create", 00:06:00.496 "bdev_delay_update_latency", 00:06:00.496 "bdev_zone_block_delete", 00:06:00.496 "bdev_zone_block_create", 00:06:00.496 "blobfs_create", 00:06:00.496 "blobfs_detect", 00:06:00.496 "blobfs_set_cache_size", 00:06:00.496 "bdev_aio_delete", 00:06:00.496 "bdev_aio_rescan", 00:06:00.496 "bdev_aio_create", 00:06:00.496 "bdev_ftl_set_property", 00:06:00.496 "bdev_ftl_get_properties", 00:06:00.496 "bdev_ftl_get_stats", 00:06:00.496 "bdev_ftl_unmap", 00:06:00.496 "bdev_ftl_unload", 00:06:00.496 "bdev_ftl_delete", 00:06:00.496 "bdev_ftl_load", 00:06:00.496 "bdev_ftl_create", 00:06:00.496 "bdev_virtio_attach_controller", 00:06:00.496 "bdev_virtio_scsi_get_devices", 00:06:00.496 "bdev_virtio_detach_controller", 00:06:00.496 "bdev_virtio_blk_set_hotplug", 00:06:00.496 "bdev_iscsi_delete", 00:06:00.496 "bdev_iscsi_create", 00:06:00.496 "bdev_iscsi_set_options", 00:06:00.496 "accel_error_inject_error", 00:06:00.496 "ioat_scan_accel_module", 00:06:00.496 "dsa_scan_accel_module", 00:06:00.496 "iaa_scan_accel_module", 00:06:00.496 "vfu_virtio_create_scsi_endpoint", 00:06:00.496 "vfu_virtio_scsi_remove_target", 00:06:00.496 "vfu_virtio_scsi_add_target", 00:06:00.496 "vfu_virtio_create_blk_endpoint", 00:06:00.496 "vfu_virtio_delete_endpoint", 00:06:00.496 "keyring_file_remove_key", 00:06:00.496 "keyring_file_add_key", 00:06:00.496 "keyring_linux_set_options", 00:06:00.496 "iscsi_get_histogram", 00:06:00.496 "iscsi_enable_histogram", 00:06:00.496 "iscsi_set_options", 00:06:00.496 "iscsi_get_auth_groups", 00:06:00.496 "iscsi_auth_group_remove_secret", 00:06:00.496 "iscsi_auth_group_add_secret", 00:06:00.496 "iscsi_delete_auth_group", 00:06:00.496 "iscsi_create_auth_group", 00:06:00.496 "iscsi_set_discovery_auth", 00:06:00.496 "iscsi_get_options", 00:06:00.496 "iscsi_target_node_request_logout", 00:06:00.496 "iscsi_target_node_set_redirect", 00:06:00.496 "iscsi_target_node_set_auth", 00:06:00.496 "iscsi_target_node_add_lun", 00:06:00.496 "iscsi_get_stats", 00:06:00.496 "iscsi_get_connections", 00:06:00.496 "iscsi_portal_group_set_auth", 00:06:00.496 "iscsi_start_portal_group", 00:06:00.496 "iscsi_delete_portal_group", 00:06:00.496 "iscsi_create_portal_group", 00:06:00.496 "iscsi_get_portal_groups", 00:06:00.496 "iscsi_delete_target_node", 00:06:00.496 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.496 "iscsi_target_node_add_pg_ig_maps", 00:06:00.496 "iscsi_create_target_node", 00:06:00.496 "iscsi_get_target_nodes", 00:06:00.496 "iscsi_delete_initiator_group", 00:06:00.496 "iscsi_initiator_group_remove_initiators", 00:06:00.496 "iscsi_initiator_group_add_initiators", 00:06:00.496 "iscsi_create_initiator_group", 00:06:00.496 "iscsi_get_initiator_groups", 00:06:00.496 "nvmf_set_crdt", 00:06:00.496 "nvmf_set_config", 00:06:00.496 "nvmf_set_max_subsystems", 00:06:00.496 "nvmf_stop_mdns_prr", 00:06:00.496 "nvmf_publish_mdns_prr", 00:06:00.496 "nvmf_subsystem_get_listeners", 00:06:00.496 "nvmf_subsystem_get_qpairs", 00:06:00.496 "nvmf_subsystem_get_controllers", 00:06:00.496 
"nvmf_get_stats", 00:06:00.496 "nvmf_get_transports", 00:06:00.496 "nvmf_create_transport", 00:06:00.496 "nvmf_get_targets", 00:06:00.496 "nvmf_delete_target", 00:06:00.496 "nvmf_create_target", 00:06:00.496 "nvmf_subsystem_allow_any_host", 00:06:00.496 "nvmf_subsystem_remove_host", 00:06:00.496 "nvmf_subsystem_add_host", 00:06:00.496 "nvmf_ns_remove_host", 00:06:00.496 "nvmf_ns_add_host", 00:06:00.496 "nvmf_subsystem_remove_ns", 00:06:00.496 "nvmf_subsystem_add_ns", 00:06:00.496 "nvmf_subsystem_listener_set_ana_state", 00:06:00.496 "nvmf_discovery_get_referrals", 00:06:00.496 "nvmf_discovery_remove_referral", 00:06:00.496 "nvmf_discovery_add_referral", 00:06:00.496 "nvmf_subsystem_remove_listener", 00:06:00.496 "nvmf_subsystem_add_listener", 00:06:00.496 "nvmf_delete_subsystem", 00:06:00.496 "nvmf_create_subsystem", 00:06:00.496 "nvmf_get_subsystems", 00:06:00.496 "env_dpdk_get_mem_stats", 00:06:00.496 "nbd_get_disks", 00:06:00.496 "nbd_stop_disk", 00:06:00.496 "nbd_start_disk", 00:06:00.496 "ublk_recover_disk", 00:06:00.496 "ublk_get_disks", 00:06:00.496 "ublk_stop_disk", 00:06:00.496 "ublk_start_disk", 00:06:00.496 "ublk_destroy_target", 00:06:00.496 "ublk_create_target", 00:06:00.496 "virtio_blk_create_transport", 00:06:00.496 "virtio_blk_get_transports", 00:06:00.496 "vhost_controller_set_coalescing", 00:06:00.496 "vhost_get_controllers", 00:06:00.496 "vhost_delete_controller", 00:06:00.496 "vhost_create_blk_controller", 00:06:00.496 "vhost_scsi_controller_remove_target", 00:06:00.496 "vhost_scsi_controller_add_target", 00:06:00.496 "vhost_start_scsi_controller", 00:06:00.496 "vhost_create_scsi_controller", 00:06:00.496 "thread_set_cpumask", 00:06:00.496 "framework_get_governor", 00:06:00.496 "framework_get_scheduler", 00:06:00.496 "framework_set_scheduler", 00:06:00.496 "framework_get_reactors", 00:06:00.496 "thread_get_io_channels", 00:06:00.496 "thread_get_pollers", 00:06:00.496 "thread_get_stats", 00:06:00.496 "framework_monitor_context_switch", 00:06:00.496 "spdk_kill_instance", 00:06:00.496 "log_enable_timestamps", 00:06:00.496 "log_get_flags", 00:06:00.496 "log_clear_flag", 00:06:00.496 "log_set_flag", 00:06:00.497 "log_get_level", 00:06:00.497 "log_set_level", 00:06:00.497 "log_get_print_level", 00:06:00.497 "log_set_print_level", 00:06:00.497 "framework_enable_cpumask_locks", 00:06:00.497 "framework_disable_cpumask_locks", 00:06:00.497 "framework_wait_init", 00:06:00.497 "framework_start_init", 00:06:00.497 "scsi_get_devices", 00:06:00.497 "bdev_get_histogram", 00:06:00.497 "bdev_enable_histogram", 00:06:00.497 "bdev_set_qos_limit", 00:06:00.497 "bdev_set_qd_sampling_period", 00:06:00.497 "bdev_get_bdevs", 00:06:00.497 "bdev_reset_iostat", 00:06:00.497 "bdev_get_iostat", 00:06:00.497 "bdev_examine", 00:06:00.497 "bdev_wait_for_examine", 00:06:00.497 "bdev_set_options", 00:06:00.497 "notify_get_notifications", 00:06:00.497 "notify_get_types", 00:06:00.497 "accel_get_stats", 00:06:00.497 "accel_set_options", 00:06:00.497 "accel_set_driver", 00:06:00.497 "accel_crypto_key_destroy", 00:06:00.497 "accel_crypto_keys_get", 00:06:00.497 "accel_crypto_key_create", 00:06:00.497 "accel_assign_opc", 00:06:00.497 "accel_get_module_info", 00:06:00.497 "accel_get_opc_assignments", 00:06:00.497 "vmd_rescan", 00:06:00.497 "vmd_remove_device", 00:06:00.497 "vmd_enable", 00:06:00.497 "sock_get_default_impl", 00:06:00.497 "sock_set_default_impl", 00:06:00.497 "sock_impl_set_options", 00:06:00.497 "sock_impl_get_options", 00:06:00.497 "iobuf_get_stats", 00:06:00.497 "iobuf_set_options", 
00:06:00.497 "keyring_get_keys", 00:06:00.497 "framework_get_pci_devices", 00:06:00.497 "framework_get_config", 00:06:00.497 "framework_get_subsystems", 00:06:00.497 "vfu_tgt_set_base_path", 00:06:00.497 "trace_get_info", 00:06:00.497 "trace_get_tpoint_group_mask", 00:06:00.497 "trace_disable_tpoint_group", 00:06:00.497 "trace_enable_tpoint_group", 00:06:00.497 "trace_clear_tpoint_mask", 00:06:00.497 "trace_set_tpoint_mask", 00:06:00.497 "spdk_get_version", 00:06:00.497 "rpc_get_methods" 00:06:00.497 ] 00:06:00.497 20:00:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.497 20:00:04 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.497 20:00:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.758 20:00:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.758 20:00:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1924557 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1924557 ']' 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1924557 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1924557 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1924557' 00:06:00.758 killing process with pid 1924557 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1924557 00:06:00.758 20:00:04 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1924557 00:06:01.326 00:06:01.326 real 0m2.024s 00:06:01.326 user 0m3.716s 00:06:01.326 sys 0m0.663s 00:06:01.326 20:00:04 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.326 20:00:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.326 ************************************ 00:06:01.326 END TEST spdkcli_tcp 00:06:01.326 ************************************ 00:06:01.326 20:00:04 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.326 20:00:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.326 20:00:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.326 20:00:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.326 ************************************ 00:06:01.326 START TEST dpdk_mem_utility 00:06:01.326 ************************************ 00:06:01.326 20:00:04 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.326 * Looking for test storage... 
00:06:01.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.326 20:00:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.326 20:00:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1924890 00:06:01.326 20:00:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.326 20:00:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1924890 00:06:01.326 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1924890 ']' 00:06:01.326 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.326 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.326 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.326 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.326 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.585 [2024-07-24 20:00:05.160922] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:01.585 [2024-07-24 20:00:05.161097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924890 ] 00:06:01.585 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.585 [2024-07-24 20:00:05.289691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.844 [2024-07-24 20:00:05.489756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.411 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.411 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:02.411 20:00:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.411 20:00:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.411 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.411 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.411 { 00:06:02.411 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.411 } 00:06:02.411 20:00:05 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.411 20:00:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.411 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:02.411 1 heaps totaling size 814.000000 MiB 00:06:02.411 size: 814.000000 MiB heap id: 0 00:06:02.411 end heaps---------- 00:06:02.411 8 mempools totaling size 598.116089 MiB 00:06:02.411 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.411 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.411 size: 84.521057 MiB name: bdev_io_1924890 00:06:02.411 size: 51.011292 MiB name: evtpool_1924890 00:06:02.411 
size: 50.003479 MiB name: msgpool_1924890 00:06:02.411 size: 21.763794 MiB name: PDU_Pool 00:06:02.411 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.411 size: 0.026123 MiB name: Session_Pool 00:06:02.411 end mempools------- 00:06:02.411 6 memzones totaling size 4.142822 MiB 00:06:02.411 size: 1.000366 MiB name: RG_ring_0_1924890 00:06:02.411 size: 1.000366 MiB name: RG_ring_1_1924890 00:06:02.411 size: 1.000366 MiB name: RG_ring_4_1924890 00:06:02.411 size: 1.000366 MiB name: RG_ring_5_1924890 00:06:02.411 size: 0.125366 MiB name: RG_ring_2_1924890 00:06:02.411 size: 0.015991 MiB name: RG_ring_3_1924890 00:06:02.411 end memzones------- 00:06:02.411 20:00:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.411 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:02.411 list of free elements. size: 12.519348 MiB 00:06:02.411 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:02.411 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:02.411 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:02.411 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:02.411 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:02.411 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:02.411 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:02.411 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:02.411 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:02.411 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:02.411 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:02.411 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:02.411 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:02.411 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:02.411 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:02.411 list of standard malloc elements. 
size: 199.218079 MiB 00:06:02.411 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:02.411 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:02.412 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:02.412 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:02.412 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:02.412 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.412 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:02.412 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.412 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:02.412 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.412 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:02.412 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:02.412 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:02.412 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:02.412 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:02.412 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:02.412 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:02.412 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:02.412 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:02.412 list of memzone associated elements. 
size: 602.262573 MiB 00:06:02.412 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:02.412 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.412 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:02.412 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.412 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:02.412 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1924890_0 00:06:02.412 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:02.412 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1924890_0 00:06:02.412 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:02.412 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1924890_0 00:06:02.412 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:02.412 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.412 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:02.412 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.412 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:02.412 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1924890 00:06:02.412 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:02.412 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1924890 00:06:02.412 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.412 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1924890 00:06:02.412 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:02.412 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.412 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:02.412 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.412 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:02.412 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.412 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:02.412 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.412 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:02.412 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1924890 00:06:02.412 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:02.412 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1924890 00:06:02.412 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:02.412 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1924890 00:06:02.412 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:02.412 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1924890 00:06:02.412 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:02.412 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1924890 00:06:02.412 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:02.412 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.412 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:02.412 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.412 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:02.412 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.412 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:02.412 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1924890 00:06:02.412 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:02.412 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.412 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:02.412 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.412 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:02.412 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1924890 00:06:02.412 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:02.412 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.412 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:02.412 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1924890 00:06:02.412 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:02.412 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1924890 00:06:02.412 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:02.412 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.412 20:00:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.412 20:00:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1924890 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1924890 ']' 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1924890 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1924890 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1924890' 00:06:02.412 killing process with pid 1924890 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1924890 00:06:02.412 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1924890 00:06:03.005 00:06:03.005 real 0m1.798s 00:06:03.005 user 0m1.867s 00:06:03.005 sys 0m0.698s 00:06:03.005 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.005 20:00:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.005 ************************************ 00:06:03.005 END TEST dpdk_mem_utility 00:06:03.005 ************************************ 00:06:03.264 20:00:06 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:03.264 20:00:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.264 20:00:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.264 20:00:06 -- common/autotest_common.sh@10 -- # set +x 00:06:03.264 ************************************ 00:06:03.264 START TEST event 00:06:03.264 ************************************ 00:06:03.264 20:00:06 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:03.264 * Looking for test storage... 
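The dpdk_mem_utility pass above takes one memory snapshot over RPC and post-processes it twice: plain dpdk_mem_info.py prints the heap/mempool/memzone summary, and -m 0 prints the element-level map of heap 0. The same sequence, assuming rpc.py talks to the default /var/tmp/spdk.sock:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0           # per-element map of heap id 0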
00:06:03.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:03.264 20:00:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:03.264 20:00:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.264 20:00:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.264 20:00:06 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:03.264 20:00:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.264 20:00:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.264 ************************************ 00:06:03.264 START TEST event_perf 00:06:03.264 ************************************ 00:06:03.264 20:00:06 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.264 Running I/O for 1 seconds...[2024-07-24 20:00:06.954526] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:03.264 [2024-07-24 20:00:06.954592] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925206 ] 00:06:03.264 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.522 [2024-07-24 20:00:07.052562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.522 [2024-07-24 20:00:07.277131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.522 [2024-07-24 20:00:07.277195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.522 [2024-07-24 20:00:07.277251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.522 [2024-07-24 20:00:07.277255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.896 Running I/O for 1 seconds... 00:06:04.896 lcore 0: 165253 00:06:04.896 lcore 1: 165253 00:06:04.896 lcore 2: 165251 00:06:04.896 lcore 3: 165252 00:06:04.896 done. 00:06:04.896 00:06:04.896 real 0m1.528s 00:06:04.896 user 0m4.386s 00:06:04.896 sys 0m0.130s 00:06:04.896 20:00:08 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.896 20:00:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.896 ************************************ 00:06:04.896 END TEST event_perf 00:06:04.896 ************************************ 00:06:04.896 20:00:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.896 20:00:08 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:04.896 20:00:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.896 20:00:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.896 ************************************ 00:06:04.896 START TEST event_reactor 00:06:04.896 ************************************ 00:06:04.896 20:00:08 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.896 [2024-07-24 20:00:08.552237] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
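event_perf above runs with -m 0xF, which is why four reactors come up and four lcore counters are printed: each set bit in the mask pins one reactor, and the near-equal counts (about 165k events each) show the load spread evenly across cores 0-3. A quick shell decode of such a mask, for illustration:

    mask=0xF
    for bit in 0 1 2 3; do
        (( (mask >> bit) & 1 )) && echo "reactor on lcore $bit"
    done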
00:06:04.896 [2024-07-24 20:00:08.552339] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925616 ] 00:06:04.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.896 [2024-07-24 20:00:08.666970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.155 [2024-07-24 20:00:08.869524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.535 test_start 00:06:06.535 oneshot 00:06:06.535 tick 100 00:06:06.535 tick 100 00:06:06.535 tick 250 00:06:06.535 tick 100 00:06:06.535 tick 100 00:06:06.535 tick 100 00:06:06.535 tick 250 00:06:06.535 tick 500 00:06:06.535 tick 100 00:06:06.535 tick 100 00:06:06.535 tick 250 00:06:06.535 tick 100 00:06:06.535 tick 100 00:06:06.535 test_end 00:06:06.535 00:06:06.535 real 0m1.520s 00:06:06.535 user 0m1.368s 00:06:06.535 sys 0m0.141s 00:06:06.535 20:00:10 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.535 20:00:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.535 ************************************ 00:06:06.535 END TEST event_reactor 00:06:06.535 ************************************ 00:06:06.535 20:00:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.535 20:00:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:06.535 20:00:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.535 20:00:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.535 ************************************ 00:06:06.535 START TEST event_reactor_perf 00:06:06.535 ************************************ 00:06:06.535 20:00:10 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.535 [2024-07-24 20:00:10.144253] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:06:06.535 [2024-07-24 20:00:10.144329] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926025 ] 00:06:06.535 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.535 [2024-07-24 20:00:10.234349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.794 [2024-07-24 20:00:10.420391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.183 test_start 00:06:08.183 test_end 00:06:08.183 Performance: 179791 events per second 00:06:08.183 00:06:08.183 real 0m1.447s 00:06:08.183 user 0m1.318s 00:06:08.183 sys 0m0.119s 00:06:08.183 20:00:11 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.183 20:00:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.183 ************************************ 00:06:08.183 END TEST event_reactor_perf 00:06:08.183 ************************************ 00:06:08.183 20:00:11 event -- event/event.sh@49 -- # uname -s 00:06:08.183 20:00:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:08.183 20:00:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:08.183 20:00:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.183 20:00:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.183 20:00:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.183 ************************************ 00:06:08.183 START TEST event_scheduler 00:06:08.183 ************************************ 00:06:08.183 20:00:11 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:08.183 * Looking for test storage... 00:06:08.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:08.183 20:00:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:08.183 20:00:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1926336 00:06:08.183 20:00:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:08.183 20:00:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.183 20:00:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1926336 00:06:08.183 20:00:11 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1926336 ']' 00:06:08.183 20:00:11 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.183 20:00:11 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.183 20:00:11 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.183 20:00:11 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.183 20:00:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.183 [2024-07-24 20:00:11.746745] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:08.183 [2024-07-24 20:00:11.746843] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926336 ] 00:06:08.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.183 [2024-07-24 20:00:11.849931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.440 [2024-07-24 20:00:12.063643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.440 [2024-07-24 20:00:12.063744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.440 [2024-07-24 20:00:12.063803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.440 [2024-07-24 20:00:12.063808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.698 20:00:12 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.698 20:00:12 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:08.698 20:00:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.698 20:00:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.698 20:00:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.698 [2024-07-24 20:00:12.265331] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:08.698 [2024-07-24 20:00:12.265371] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.698 [2024-07-24 20:00:12.265394] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.698 [2024-07-24 20:00:12.265409] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.698 [2024-07-24 20:00:12.265423] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.698 20:00:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.698 20:00:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.698 20:00:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.698 20:00:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.698 [2024-07-24 20:00:12.440354] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
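The sequence above is why the scheduler app starts with --wait-for-rpc: the framework pauses before subsystem init so framework_set_scheduler can select the dynamic scheduler first, and the "Unable to initialize dpdk governor" notice is tolerated because the scheduler falls back to its built-in limits (load 20, core 80, busy 95). The same two calls in isolation, assuming rpc.py is pointed at the app's RPC socket:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init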
00:06:08.698 20:00:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.699 20:00:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.699 20:00:12 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.699 20:00:12 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.699 20:00:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.699 ************************************ 00:06:08.699 START TEST scheduler_create_thread 00:06:08.699 ************************************ 00:06:08.699 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:08.699 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.699 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.699 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.957 2 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.957 3 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.957 4 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.957 5 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.957 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.957 6 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 7 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 8 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 9 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 10 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.958 20:00:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.525 20:00:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.525 00:06:09.525 real 0m0.592s 00:06:09.525 user 0m0.014s 00:06:09.525 sys 0m0.005s 00:06:09.525 20:00:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.525 20:00:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.525 ************************************ 00:06:09.525 END TEST scheduler_create_thread 00:06:09.525 ************************************ 00:06:09.525 20:00:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.525 20:00:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1926336 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1926336 ']' 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1926336 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1926336 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1926336' 00:06:09.525 killing process with pid 1926336 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1926336 00:06:09.525 20:00:13 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1926336 00:06:09.782 [2024-07-24 20:00:13.545923] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
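scheduler_create_thread above drives the whole thread lifecycle through the test's RPC plugin: pinned busy threads (-a 100) and pinned idle threads (-a 0) on each single-core mask, then a thread whose activity is changed at runtime, then one that is deleted. A condensed sketch of those calls; rpc_cmd stands in for scripts/rpc.py invoked against the scheduler app's socket with the plugin on PYTHONPATH:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # now 50% busy
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"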
00:06:10.348 00:06:10.348 real 0m2.287s 00:06:10.348 user 0m3.420s 00:06:10.348 sys 0m0.522s 00:06:10.348 20:00:13 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.348 20:00:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.348 ************************************ 00:06:10.348 END TEST event_scheduler 00:06:10.348 ************************************ 00:06:10.348 20:00:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.348 20:00:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.348 20:00:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.348 20:00:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.348 20:00:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.348 ************************************ 00:06:10.348 START TEST app_repeat 00:06:10.348 ************************************ 00:06:10.348 20:00:13 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1926649 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1926649' 00:06:10.348 Process app_repeat pid: 1926649 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:10.348 spdk_app_start Round 0 00:06:10.348 20:00:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1926649 /var/tmp/spdk-nbd.sock 00:06:10.348 20:00:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1926649 ']' 00:06:10.348 20:00:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.348 20:00:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.348 20:00:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.348 20:00:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.348 20:00:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.348 [2024-07-24 20:00:13.997038] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
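app_repeat restarts the app across several rounds and re-runs nbd I/O each time; the trace below shows round 0's setup against the app's dedicated socket: two 64 MiB malloc bdevs with 4096-byte blocks, each exported as an nbd device. The same calls in isolation; the Malloc1/nbd1 pair is inferred from the bdev_list and nbd_list above rather than taken from a traced command:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # -> Malloc0
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1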
00:06:10.348 [2024-07-24 20:00:13.997114] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926649 ] 00:06:10.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.348 [2024-07-24 20:00:14.091672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.606 [2024-07-24 20:00:14.294595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.606 [2024-07-24 20:00:14.294602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.863 20:00:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.863 20:00:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:10.863 20:00:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.121 Malloc0 00:06:11.121 20:00:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.378 Malloc1 00:06:11.638 20:00:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.638 20:00:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.898 /dev/nbd0 00:06:11.898 20:00:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.898 20:00:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.898 20:00:15 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.898 1+0 records in 00:06:11.898 1+0 records out 00:06:11.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330699 s, 12.4 MB/s 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.898 20:00:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:11.898 20:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.898 20:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.898 20:00:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.156 /dev/nbd1 00:06:12.156 20:00:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.156 20:00:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.156 1+0 records in 00:06:12.156 1+0 records out 00:06:12.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201675 s, 20.3 MB/s 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.156 20:00:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.156 20:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.156 20:00:15 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.156 20:00:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.156 20:00:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.156 20:00:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.723 { 00:06:12.723 "nbd_device": "/dev/nbd0", 00:06:12.723 "bdev_name": "Malloc0" 00:06:12.723 }, 00:06:12.723 { 00:06:12.723 "nbd_device": "/dev/nbd1", 00:06:12.723 "bdev_name": "Malloc1" 00:06:12.723 } 00:06:12.723 ]' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.723 { 00:06:12.723 "nbd_device": "/dev/nbd0", 00:06:12.723 "bdev_name": "Malloc0" 00:06:12.723 }, 00:06:12.723 { 00:06:12.723 "nbd_device": "/dev/nbd1", 00:06:12.723 "bdev_name": "Malloc1" 00:06:12.723 } 00:06:12.723 ]' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.723 /dev/nbd1' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.723 /dev/nbd1' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.723 256+0 records in 00:06:12.723 256+0 records out 00:06:12.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665979 s, 157 MB/s 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.723 256+0 records in 00:06:12.723 256+0 records out 00:06:12.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301704 s, 34.8 MB/s 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.723 256+0 records in 00:06:12.723 256+0 records out 00:06:12.723 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0324814 s, 32.3 MB/s 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.723 20:00:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.288 20:00:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.288 20:00:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.288 20:00:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.289 20:00:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.289 20:00:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.289 20:00:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.289 20:00:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.289 20:00:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.289 20:00:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.289 20:00:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.546 20:00:17 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.546 20:00:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.805 20:00:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.805 20:00:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.370 20:00:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.935 [2024-07-24 20:00:18.482662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.935 [2024-07-24 20:00:18.670676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.935 [2024-07-24 20:00:18.670676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.194 [2024-07-24 20:00:18.741315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.194 [2024-07-24 20:00:18.741395] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.746 20:00:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.746 20:00:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.746 spdk_app_start Round 1 00:06:17.746 20:00:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1926649 /var/tmp/spdk-nbd.sock 00:06:17.746 20:00:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1926649 ']' 00:06:17.746 20:00:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.746 20:00:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.747 20:00:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
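Throughout Round 0 above (and again in each round below), every nbd_start_disk is followed by waitfornbd, which the xtrace shows polling /proc/partitions up to 20 times and then proving the device readable with a one-block O_DIRECT dd. A condensed sketch; the sleep interval is an assumption, and the real helper retries the read in a second 20-try loop:

    # waitfornbd NAME -- wait until /dev/NAME is attached and readable.
    # Shape follows the common/autotest_common.sh xtrace above.
    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # kernel sees the disk
            sleep 0.1                                          # interval assumed
        done
        ((i <= 20)) || return 1
        # one O_DIRECT block read proves the backing bdev actually services I/O
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]                       # zero bytes back means a silent failure
    }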
00:06:17.747 20:00:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.747 20:00:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.012 20:00:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.012 20:00:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:18.012 20:00:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.269 Malloc0 00:06:18.269 20:00:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.834 Malloc1 00:06:18.834 20:00:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.834 20:00:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.092 /dev/nbd0 00:06:19.092 20:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.092 20:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:19.092 1+0 records in 00:06:19.092 1+0 records out 00:06:19.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250721 s, 16.3 MB/s 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.092 20:00:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.092 20:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.092 20:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.092 20:00:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.350 /dev/nbd1 00:06:19.350 20:00:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.350 20:00:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.350 20:00:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:19.350 20:00:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.350 20:00:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.351 1+0 records in 00:06:19.351 1+0 records out 00:06:19.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217235 s, 18.9 MB/s 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.351 20:00:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.351 20:00:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.351 20:00:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.351 20:00:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.351 20:00:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.351 20:00:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:19.914 { 00:06:19.914 "nbd_device": "/dev/nbd0", 00:06:19.914 "bdev_name": "Malloc0" 00:06:19.914 }, 00:06:19.914 { 00:06:19.914 "nbd_device": "/dev/nbd1", 00:06:19.914 "bdev_name": "Malloc1" 00:06:19.914 } 00:06:19.914 ]' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.914 { 00:06:19.914 "nbd_device": "/dev/nbd0", 00:06:19.914 "bdev_name": "Malloc0" 00:06:19.914 }, 00:06:19.914 { 00:06:19.914 "nbd_device": "/dev/nbd1", 00:06:19.914 "bdev_name": "Malloc1" 00:06:19.914 } 00:06:19.914 ]' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.914 /dev/nbd1' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.914 /dev/nbd1' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.914 256+0 records in 00:06:19.914 256+0 records out 00:06:19.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574644 s, 182 MB/s 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.914 256+0 records in 00:06:19.914 256+0 records out 00:06:19.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300945 s, 34.8 MB/s 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.914 256+0 records in 00:06:19.914 256+0 records out 00:06:19.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032547 s, 32.2 MB/s 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.914 20:00:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.172 20:00:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.430 20:00:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.994 20:00:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.252 20:00:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.252 20:00:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.818 20:00:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.076 [2024-07-24 20:00:25.820061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.333 [2024-07-24 20:00:26.028910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.333 [2024-07-24 20:00:26.028917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.333 [2024-07-24 20:00:26.100835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.333 [2024-07-24 20:00:26.100923] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.862 20:00:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.862 20:00:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.862 spdk_app_start Round 2 00:06:24.862 20:00:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1926649 /var/tmp/spdk-nbd.sock 00:06:24.862 20:00:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1926649 ']' 00:06:24.862 20:00:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.862 20:00:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.862 20:00:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:24.862 20:00:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.862 20:00:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.121 20:00:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.121 20:00:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:25.121 20:00:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.379 Malloc0 00:06:25.379 20:00:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.952 Malloc1 00:06:25.952 20:00:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.952 20:00:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.519 /dev/nbd0 00:06:26.519 20:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.519 20:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:26.519 1+0 records in 00:06:26.519 1+0 records out 00:06:26.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212087 s, 19.3 MB/s 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.519 20:00:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.519 20:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.519 20:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.519 20:00:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.085 /dev/nbd1 00:06:27.085 20:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.085 20:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.085 1+0 records in 00:06:27.085 1+0 records out 00:06:27.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227084 s, 18.0 MB/s 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.085 20:00:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.086 20:00:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.086 20:00:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.086 20:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.086 20:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.086 20:00:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.086 20:00:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.086 20:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:27.344 { 00:06:27.344 "nbd_device": "/dev/nbd0", 00:06:27.344 "bdev_name": "Malloc0" 00:06:27.344 }, 00:06:27.344 { 00:06:27.344 "nbd_device": "/dev/nbd1", 00:06:27.344 "bdev_name": "Malloc1" 00:06:27.344 } 00:06:27.344 ]' 00:06:27.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.344 { 00:06:27.344 "nbd_device": "/dev/nbd0", 00:06:27.344 "bdev_name": "Malloc0" 00:06:27.344 }, 00:06:27.344 { 00:06:27.344 "nbd_device": "/dev/nbd1", 00:06:27.344 "bdev_name": "Malloc1" 00:06:27.344 } 00:06:27.344 ]' 00:06:27.344 20:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.344 /dev/nbd1' 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.344 /dev/nbd1' 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.344 256+0 records in 00:06:27.344 256+0 records out 00:06:27.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0074703 s, 140 MB/s 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.344 256+0 records in 00:06:27.344 256+0 records out 00:06:27.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301502 s, 34.8 MB/s 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.344 256+0 records in 00:06:27.344 256+0 records out 00:06:27.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319601 s, 32.8 MB/s 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.344 20:00:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.910 20:00:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.169 20:00:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.735 20:00:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.735 20:00:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.300 20:00:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.561 [2024-07-24 20:00:33.266194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.820 [2024-07-24 20:00:33.466685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.820 [2024-07-24 20:00:33.466690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.820 [2024-07-24 20:00:33.538458] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.820 [2024-07-24 20:00:33.538531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.348 20:00:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1926649 /var/tmp/spdk-nbd.sock 00:06:32.348 20:00:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1926649 ']' 00:06:32.348 20:00:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.348 20:00:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.348 20:00:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
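The teardown between rounds, just completed above, is symmetric: nbd_stop_disk per device, waitfornbd_exit until the name leaves /proc/partitions, then nbd_get_count must report 0. Sketches of both helpers as the trace shows them (the poll interval is an assumption); note the `|| true` that keeps grep -c from failing the function when the disk list is empty, which is exactly the count=0 case logged here:

    # nbd_get_count RPC_SOCK -- how many nbd devices the target still exports.
    nbd_get_count() {
        local rpc_server=$1 disks names
        disks=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$disks" | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true   # grep -c exits 1 on zero matches
    }

    # waitfornbd_exit NAME -- poll until the kernel drops the device.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: success
            sleep 0.1                                          # interval assumed
        done
        ((i <= 20))
    }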
00:06:32.348 20:00:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.348 20:00:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:32.943 20:00:36 event.app_repeat -- event/event.sh@39 -- # killprocess 1926649 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1926649 ']' 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1926649 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1926649 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1926649' 00:06:32.943 killing process with pid 1926649 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1926649 00:06:32.943 20:00:36 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1926649 00:06:33.206 spdk_app_start is called in Round 0. 00:06:33.206 Shutdown signal received, stop current app iteration 00:06:33.206 Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 reinitialization... 00:06:33.206 spdk_app_start is called in Round 1. 00:06:33.206 Shutdown signal received, stop current app iteration 00:06:33.206 Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 reinitialization... 00:06:33.206 spdk_app_start is called in Round 2. 00:06:33.206 Shutdown signal received, stop current app iteration 00:06:33.206 Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 reinitialization... 00:06:33.206 spdk_app_start is called in Round 3. 
00:06:33.206 Shutdown signal received, stop current app iteration 00:06:33.206 20:00:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.206 20:00:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:33.206 00:06:33.206 real 0m22.853s 00:06:33.206 user 0m51.200s 00:06:33.206 sys 0m4.565s 00:06:33.206 20:00:36 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.206 20:00:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.206 ************************************ 00:06:33.206 END TEST app_repeat 00:06:33.206 ************************************ 00:06:33.206 20:00:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.206 20:00:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.206 20:00:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.206 20:00:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.206 20:00:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.206 ************************************ 00:06:33.206 START TEST cpu_locks 00:06:33.206 ************************************ 00:06:33.206 20:00:36 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.207 * Looking for test storage... 00:06:33.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:33.207 20:00:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:33.207 20:00:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:33.207 20:00:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:33.207 20:00:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:33.207 20:00:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.207 20:00:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.207 20:00:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.207 ************************************ 00:06:33.207 START TEST default_locks 00:06:33.207 ************************************ 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1929547 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1929547 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1929547 ']' 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
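The default_locks test that begins here launches spdk_tgt pinned to core 0 (-m 0x1) and, once the RPC socket is up, asserts that the target holds its per-core CPU lock file. A sketch of the check traced below; the "lslocks: write error" in the log is grep -q closing the pipe after its first match, not a test failure:

    # locks_exist PID -- assert the target holds its per-core CPU lock.
    # spdk_cpu_lock is the lock-file name SPDK uses per claimed core
    # (e.g. /var/tmp/spdk_cpu_lock_000); the exact path is not in this log.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
        # grep -q exits on first match and closes the pipe, so lslocks may
        # report "write error" (as above) even when the check passes
    }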
00:06:33.466 20:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.466 20:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.466 [2024-07-24 20:00:37.056925] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:33.466 [2024-07-24 20:00:37.057038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929547 ] 00:06:33.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.466 [2024-07-24 20:00:37.160602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.725 [2024-07-24 20:00:37.367014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.661 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.661 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:34.661 20:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1929547 00:06:34.661 20:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1929547 00:06:34.661 20:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.919 lslocks: write error 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1929547 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1929547 ']' 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1929547 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1929547 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1929547' 00:06:34.919 killing process with pid 1929547 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1929547 00:06:34.919 20:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1929547 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1929547 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1929547 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 1929547 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1929547 ']' 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1929547) - No such process 00:06:35.487 ERROR: process (pid: 1929547) is no longer running 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.487 00:06:35.487 real 0m2.239s 00:06:35.487 user 0m2.385s 00:06:35.487 sys 0m0.810s 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.487 20:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.487 ************************************ 00:06:35.487 END TEST default_locks 00:06:35.487 ************************************ 00:06:35.487 20:00:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:35.487 20:00:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.487 20:00:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.487 20:00:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.748 ************************************ 00:06:35.748 START TEST default_locks_via_rpc 00:06:35.748 ************************************ 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1929840 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 
1929840 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1929840 ']' 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.748 20:00:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.748 [2024-07-24 20:00:39.358914] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:35.748 [2024-07-24 20:00:39.359015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929840 ] 00:06:35.748 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.748 [2024-07-24 20:00:39.453543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.008 [2024-07-24 20:00:39.657675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.947 20:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.947 20:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:36.947 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.947 20:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.947 20:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.947 20:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.947 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1929840 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1929840 00:06:36.948 20:00:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.519 20:00:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # 
killprocess 1929840 00:06:37.519 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1929840 ']' 00:06:37.519 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1929840 00:06:37.519 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:37.519 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.519 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1929840 00:06:37.779 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.779 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.779 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1929840' 00:06:37.779 killing process with pid 1929840 00:06:37.779 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1929840 00:06:37.779 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1929840 00:06:38.347 00:06:38.347 real 0m2.652s 00:06:38.347 user 0m2.880s 00:06:38.347 sys 0m0.977s 00:06:38.347 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.347 20:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.347 ************************************ 00:06:38.347 END TEST default_locks_via_rpc 00:06:38.347 ************************************ 00:06:38.347 20:00:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.347 20:00:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.347 20:00:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.347 20:00:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.347 ************************************ 00:06:38.347 START TEST non_locking_app_on_locked_coremask 00:06:38.347 ************************************ 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1930139 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1930139 /var/tmp/spdk.sock 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1930139 ']' 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:38.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.347 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.608 [2024-07-24 20:00:42.142332] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:38.608 [2024-07-24 20:00:42.142526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930139 ] 00:06:38.608 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.608 [2024-07-24 20:00:42.279459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.868 [2024-07-24 20:00:42.483499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1930275 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1930275 /var/tmp/spdk2.sock 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1930275 ']' 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.129 20:00:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.388 [2024-07-24 20:00:42.919913] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:39.388 [2024-07-24 20:00:42.920012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930275 ] 00:06:39.388 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.388 [2024-07-24 20:00:43.055997] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.388 [2024-07-24 20:00:43.056066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.647 [2024-07-24 20:00:43.414770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.023 20:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.023 20:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:41.023 20:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1930139 00:06:41.023 20:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1930139 00:06:41.023 20:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.957 lslocks: write error 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1930139 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1930139 ']' 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1930139 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1930139 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1930139' 00:06:41.957 killing process with pid 1930139 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1930139 00:06:41.957 20:00:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1930139 00:06:43.340 20:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1930275 00:06:43.340 20:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1930275 ']' 00:06:43.340 20:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1930275 00:06:43.340 20:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:43.340 20:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.341 20:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1930275 00:06:43.341 20:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.341 20:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.341 20:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1930275' 00:06:43.341 
killing process with pid 1930275 00:06:43.341 20:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1930275 00:06:43.341 20:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1930275 00:06:43.904 00:06:43.904 real 0m5.626s 00:06:43.904 user 0m6.232s 00:06:43.904 sys 0m1.889s 00:06:43.904 20:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.904 20:00:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.904 ************************************ 00:06:43.904 END TEST non_locking_app_on_locked_coremask 00:06:43.904 ************************************ 00:06:43.904 20:00:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.904 20:00:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.904 20:00:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.904 20:00:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.161 ************************************ 00:06:44.162 START TEST locking_app_on_unlocked_coremask 00:06:44.162 ************************************ 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1930843 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1930843 /var/tmp/spdk.sock 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1930843 ']' 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.162 20:00:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.162 [2024-07-24 20:00:47.786893] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:44.162 [2024-07-24 20:00:47.787014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930843 ] 00:06:44.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.162 [2024-07-24 20:00:47.873170] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.162 [2024-07-24 20:00:47.873223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.423 [2024-07-24 20:00:48.023873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.680 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1930865 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1930865 /var/tmp/spdk2.sock 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1930865 ']' 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.681 20:00:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.938 [2024-07-24 20:00:48.499050] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:06:44.938 [2024-07-24 20:00:48.499155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930865 ] 00:06:44.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.938 [2024-07-24 20:00:48.665862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.503 [2024-07-24 20:00:49.067847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.068 20:00:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.068 20:00:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:46.068 20:00:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1930865 00:06:46.068 20:00:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1930865 00:06:46.068 20:00:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.485 lslocks: write error 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1930843 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1930843 ']' 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1930843 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1930843 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1930843' 00:06:47.485 killing process with pid 1930843 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1930843 00:06:47.485 20:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1930843 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1930865 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1930865 ']' 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1930865 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1930865 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1930865' 00:06:48.857 killing process with pid 1930865 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1930865 00:06:48.857 20:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1930865 00:06:49.422 00:06:49.422 real 0m5.464s 00:06:49.422 user 0m5.658s 00:06:49.422 sys 0m1.867s 00:06:49.422 20:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.422 20:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.422 ************************************ 00:06:49.422 END TEST locking_app_on_unlocked_coremask 00:06:49.422 ************************************ 00:06:49.422 20:00:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:49.422 20:00:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.422 20:00:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.680 20:00:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.680 ************************************ 00:06:49.680 START TEST locking_app_on_locked_coremask 00:06:49.680 ************************************ 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1931535 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1931535 /var/tmp/spdk.sock 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1931535 ']' 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.680 20:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.680 [2024-07-24 20:00:53.351183] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:06:49.680 [2024-07-24 20:00:53.351363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931535 ] 00:06:49.680 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.938 [2024-07-24 20:00:53.484861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.938 [2024-07-24 20:00:53.705154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1931552 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1931552 /var/tmp/spdk2.sock 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1931552 /var/tmp/spdk2.sock 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1931552 /var/tmp/spdk2.sock 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1931552 ']' 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.506 20:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.506 [2024-07-24 20:00:54.181246] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:06:50.506 [2024-07-24 20:00:54.181348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931552 ] 00:06:50.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.764 [2024-07-24 20:00:54.342238] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1931535 has claimed it. 00:06:50.764 [2024-07-24 20:00:54.342364] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1931552) - No such process 00:06:51.329 ERROR: process (pid: 1931552) is no longer running 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1931535 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1931535 00:06:51.329 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.264 lslocks: write error 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1931535 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1931535 ']' 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1931535 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1931535 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1931535' 00:06:52.264 killing process with pid 1931535 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1931535 00:06:52.264 20:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1931535 00:06:52.833 00:06:52.833 real 0m3.164s 00:06:52.833 user 0m3.553s 00:06:52.834 sys 0m1.161s 00:06:52.834 20:00:56 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.834 20:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.834 ************************************ 00:06:52.834 END TEST locking_app_on_locked_coremask 00:06:52.834 ************************************ 00:06:52.834 20:00:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:52.834 20:00:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.834 20:00:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.834 20:00:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.834 ************************************ 00:06:52.834 START TEST locking_overlapped_coremask 00:06:52.834 ************************************ 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1931846 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1931846 /var/tmp/spdk.sock 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1931846 ']' 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.834 20:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.834 [2024-07-24 20:00:56.524371] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:06:52.834 [2024-07-24 20:00:56.524483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931846 ] 00:06:52.834 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.093 [2024-07-24 20:00:56.621525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.093 [2024-07-24 20:00:56.833394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.093 [2024-07-24 20:00:56.833472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.093 [2024-07-24 20:00:56.833479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1931978 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1931978 /var/tmp/spdk2.sock 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1931978 /var/tmp/spdk2.sock 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1931978 /var/tmp/spdk2.sock 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1931978 ']' 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.657 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.657 [2024-07-24 20:00:57.214983] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:06:53.657 [2024-07-24 20:00:57.215089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931978 ] 00:06:53.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.657 [2024-07-24 20:00:57.332673] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1931846 has claimed it. 00:06:53.657 [2024-07-24 20:00:57.332748] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1931978) - No such process 00:06:54.226 ERROR: process (pid: 1931978) is no longer running 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1931846 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1931846 ']' 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1931846 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1931846 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1931846' 00:06:54.226 killing process with pid 1931846 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 1931846 00:06:54.226 20:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1931846 00:06:55.164 00:06:55.164 real 0m2.121s 00:06:55.164 user 0m5.348s 00:06:55.164 sys 0m0.601s 00:06:55.164 20:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.164 20:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.164 ************************************ 00:06:55.164 END TEST locking_overlapped_coremask 00:06:55.164 ************************************ 00:06:55.164 20:00:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.164 20:00:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.164 20:00:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.164 20:00:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.164 ************************************ 00:06:55.164 START TEST locking_overlapped_coremask_via_rpc 00:06:55.164 ************************************ 00:06:55.164 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:55.164 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1932142 00:06:55.164 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.164 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1932142 /var/tmp/spdk.sock 00:06:55.164 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1932142 ']' 00:06:55.165 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.165 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.165 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.165 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.165 20:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.165 [2024-07-24 20:00:58.772219] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:55.165 [2024-07-24 20:00:58.772397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932142 ] 00:06:55.165 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.165 [2024-07-24 20:00:58.912721] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.165 [2024-07-24 20:00:58.912807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.423 [2024-07-24 20:00:59.102815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.423 [2024-07-24 20:00:59.102887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.423 [2024-07-24 20:00:59.102892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1932277 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1932277 /var/tmp/spdk2.sock 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1932277 ']' 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.682 20:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.941 [2024-07-24 20:00:59.504713] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:55.941 [2024-07-24 20:00:59.504920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932277 ] 00:06:55.941 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.941 [2024-07-24 20:00:59.650323] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.941 [2024-07-24 20:00:59.650378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.199 [2024-07-24 20:00:59.940608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.199 [2024-07-24 20:00:59.944501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.199 [2024-07-24 20:00:59.944505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.576 20:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.576 [2024-07-24 20:01:00.997614] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1932142 has claimed it. 
00:06:57.576 request: 00:06:57.576 { 00:06:57.576 "method": "framework_enable_cpumask_locks", 00:06:57.576 "req_id": 1 00:06:57.576 } 00:06:57.576 Got JSON-RPC error response 00:06:57.576 response: 00:06:57.576 { 00:06:57.576 "code": -32603, 00:06:57.576 "message": "Failed to claim CPU core: 2" 00:06:57.576 } 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1932142 /var/tmp/spdk.sock 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1932142 ']' 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.576 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.833 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.833 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.833 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1932277 /var/tmp/spdk2.sock 00:06:57.834 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1932277 ']' 00:06:57.834 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.834 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.834 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:57.834 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.834 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.399 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.399 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.399 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.399 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.399 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.399 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.399 00:06:58.399 real 0m3.313s 00:06:58.399 user 0m2.164s 00:06:58.399 sys 0m0.303s 00:06:58.399 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.399 20:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.399 ************************************ 00:06:58.399 END TEST locking_overlapped_coremask_via_rpc 00:06:58.399 ************************************ 00:06:58.399 20:01:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.399 20:01:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1932142 ]] 00:06:58.399 20:01:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1932142 00:06:58.399 20:01:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1932142 ']' 00:06:58.399 20:01:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1932142 00:06:58.399 20:01:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:58.399 20:01:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.399 20:01:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1932142 00:06:58.399 20:01:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.399 20:01:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.399 20:01:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1932142' 00:06:58.399 killing process with pid 1932142 00:06:58.399 20:01:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1932142 00:06:58.399 20:01:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1932142 00:06:58.966 20:01:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1932277 ]] 00:06:58.966 20:01:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1932277 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1932277 ']' 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1932277 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1932277 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1932277' 00:06:58.966 killing process with pid 1932277 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1932277 00:06:58.966 20:01:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1932277 00:06:59.538 20:01:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.538 20:01:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:59.538 20:01:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1932142 ]] 00:06:59.538 20:01:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1932142 00:06:59.538 20:01:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1932142 ']' 00:06:59.538 20:01:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1932142 00:06:59.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1932142) - No such process 00:06:59.538 20:01:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1932142 is not found' 00:06:59.538 Process with pid 1932142 is not found 00:06:59.538 20:01:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1932277 ]] 00:06:59.538 20:01:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1932277 00:06:59.538 20:01:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1932277 ']' 00:06:59.538 20:01:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1932277 00:06:59.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1932277) - No such process 00:06:59.538 20:01:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1932277 is not found' 00:06:59.796 Process with pid 1932277 is not found 00:06:59.796 20:01:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.796 00:06:59.796 real 0m26.434s 00:06:59.796 user 0m45.457s 00:06:59.796 sys 0m8.846s 00:06:59.796 20:01:03 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.796 20:01:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.796 ************************************ 00:06:59.796 END TEST cpu_locks 00:06:59.796 ************************************ 00:06:59.796 00:06:59.796 real 0m56.522s 00:06:59.796 user 1m47.320s 00:06:59.796 sys 0m14.636s 00:06:59.796 20:01:03 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.796 20:01:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.796 ************************************ 00:06:59.796 END TEST event 00:06:59.796 ************************************ 00:06:59.796 20:01:03 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:59.796 20:01:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.796 20:01:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.796 20:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:59.796 ************************************ 00:06:59.796 START TEST thread 00:06:59.797 ************************************ 00:06:59.797 20:01:03 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:59.797 * Looking for test storage... 00:06:59.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:59.797 20:01:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.797 20:01:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:59.797 20:01:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.797 20:01:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.797 ************************************ 00:06:59.797 START TEST thread_poller_perf 00:06:59.797 ************************************ 00:06:59.797 20:01:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.797 [2024-07-24 20:01:03.570758] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:06:59.797 [2024-07-24 20:01:03.570907] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932781 ] 00:07:00.054 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.054 [2024-07-24 20:01:03.687393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.313 [2024-07-24 20:01:03.887466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.313 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:01.737 ====================================== 00:07:01.737 busy:2720886635 (cyc) 00:07:01.737 total_run_count: 144000 00:07:01.737 tsc_hz: 2700000000 (cyc) 00:07:01.737 ====================================== 00:07:01.737 poller_cost: 18895 (cyc), 6998 (nsec) 00:07:01.737 00:07:01.737 real 0m1.537s 00:07:01.737 user 0m1.388s 00:07:01.737 sys 0m0.136s 00:07:01.737 20:01:05 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.737 20:01:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.737 ************************************ 00:07:01.737 END TEST thread_poller_perf 00:07:01.737 ************************************ 00:07:01.737 20:01:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.737 20:01:05 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:01.737 20:01:05 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.737 20:01:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.737 ************************************ 00:07:01.737 START TEST thread_poller_perf 00:07:01.737 ************************************ 00:07:01.737 20:01:05 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.737 [2024-07-24 20:01:05.177850] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:07:01.737 [2024-07-24 20:01:05.177995] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933060 ] 00:07:01.737 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.737 [2024-07-24 20:01:05.303623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.737 [2024-07-24 20:01:05.511667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.737 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:03.116 ====================================== 00:07:03.116 busy:2704229928 (cyc) 00:07:03.116 total_run_count: 1874000 00:07:03.116 tsc_hz: 2700000000 (cyc) 00:07:03.116 ====================================== 00:07:03.116 poller_cost: 1443 (cyc), 534 (nsec) 00:07:03.116 00:07:03.116 real 0m1.550s 00:07:03.116 user 0m1.384s 00:07:03.116 sys 0m0.153s 00:07:03.116 20:01:06 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.116 20:01:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.116 ************************************ 00:07:03.116 END TEST thread_poller_perf 00:07:03.116 ************************************ 00:07:03.116 20:01:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.116 00:07:03.116 real 0m3.311s 00:07:03.116 user 0m2.857s 00:07:03.116 sys 0m0.441s 00:07:03.116 20:01:06 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.116 20:01:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.116 ************************************ 00:07:03.116 END TEST thread 00:07:03.116 ************************************ 00:07:03.116 20:01:06 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:03.116 20:01:06 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.116 20:01:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.116 20:01:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.116 20:01:06 -- common/autotest_common.sh@10 -- # set +x 00:07:03.116 ************************************ 00:07:03.116 START TEST app_cmdline 00:07:03.116 ************************************ 00:07:03.116 20:01:06 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.116 * Looking for test storage... 00:07:03.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.116 20:01:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:03.116 20:01:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1933259 00:07:03.116 20:01:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:03.116 20:01:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1933259 00:07:03.116 20:01:06 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1933259 ']' 00:07:03.116 20:01:06 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.116 20:01:06 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.116 20:01:06 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:03.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.116 20:01:06 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.116 20:01:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.375 [2024-07-24 20:01:06.931744] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:07:03.375 [2024-07-24 20:01:06.931840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933259 ] 00:07:03.375 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.375 [2024-07-24 20:01:07.007703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.375 [2024-07-24 20:01:07.155710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.944 20:01:07 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.944 20:01:07 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:03.944 20:01:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:04.202 { 00:07:04.202 "version": "SPDK v24.09-pre git sha1 da8d49b2f", 00:07:04.202 "fields": { 00:07:04.202 "major": 24, 00:07:04.202 "minor": 9, 00:07:04.202 "patch": 0, 00:07:04.202 "suffix": "-pre", 00:07:04.202 "commit": "da8d49b2f" 00:07:04.202 } 00:07:04.202 } 00:07:04.202 20:01:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.202 20:01:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.202 20:01:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.202 20:01:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.202 20:01:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.202 20:01:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.202 20:01:07 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.202 20:01:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.202 20:01:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.202 20:01:07 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.460 20:01:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.460 20:01:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.460 20:01:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:04.460 20:01:08 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.718 request: 00:07:04.718 { 00:07:04.718 "method": "env_dpdk_get_mem_stats", 00:07:04.718 "req_id": 1 00:07:04.718 } 00:07:04.718 Got JSON-RPC error response 00:07:04.718 response: 00:07:04.718 { 00:07:04.718 "code": -32601, 00:07:04.718 "message": "Method not found" 00:07:04.718 } 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.977 20:01:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1933259 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1933259 ']' 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1933259 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1933259 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1933259' 00:07:04.977 killing process with pid 1933259 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@969 -- # kill 1933259 00:07:04.977 20:01:08 app_cmdline -- common/autotest_common.sh@974 -- # wait 1933259 00:07:05.543 00:07:05.543 real 0m2.374s 00:07:05.543 user 0m2.965s 00:07:05.543 sys 0m0.679s 00:07:05.543 20:01:09 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.543 20:01:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.543 ************************************ 00:07:05.543 END TEST app_cmdline 00:07:05.543 ************************************ 00:07:05.543 20:01:09 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.543 20:01:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.543 20:01:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.543 20:01:09 -- common/autotest_common.sh@10 -- # set +x 00:07:05.543 ************************************ 00:07:05.543 START TEST version 00:07:05.543 ************************************ 00:07:05.543 20:01:09 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.543 * Looking for test storage... 
00:07:05.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.543 20:01:09 version -- app/version.sh@17 -- # get_header_version major 00:07:05.543 20:01:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.543 20:01:09 version -- app/version.sh@14 -- # cut -f2 00:07:05.543 20:01:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.543 20:01:09 version -- app/version.sh@17 -- # major=24 00:07:05.543 20:01:09 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.543 20:01:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.543 20:01:09 version -- app/version.sh@14 -- # cut -f2 00:07:05.543 20:01:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.543 20:01:09 version -- app/version.sh@18 -- # minor=9 00:07:05.543 20:01:09 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.543 20:01:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.543 20:01:09 version -- app/version.sh@14 -- # cut -f2 00:07:05.543 20:01:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.543 20:01:09 version -- app/version.sh@19 -- # patch=0 00:07:05.543 20:01:09 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.806 20:01:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.806 20:01:09 version -- app/version.sh@14 -- # cut -f2 00:07:05.806 20:01:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.806 20:01:09 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.806 20:01:09 version -- app/version.sh@22 -- # version=24.9 00:07:05.806 20:01:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.806 20:01:09 version -- app/version.sh@28 -- # version=24.9rc0 00:07:05.806 20:01:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.806 20:01:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.806 20:01:09 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:05.806 20:01:09 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:05.806 00:07:05.806 real 0m0.161s 00:07:05.806 user 0m0.101s 00:07:05.806 sys 0m0.085s 00:07:05.806 20:01:09 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.806 20:01:09 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 ************************************ 00:07:05.807 END TEST version 00:07:05.807 ************************************ 00:07:05.807 20:01:09 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:05.807 20:01:09 -- spdk/autotest.sh@202 -- # uname -s 00:07:05.807 20:01:09 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:05.807 20:01:09 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:05.807 20:01:09 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:05.807 20:01:09 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
00:07:05.807 20:01:09 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:05.807 20:01:09 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:05.807 20:01:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.807 20:01:09 -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 20:01:09 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:05.807 20:01:09 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:05.807 20:01:09 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:05.807 20:01:09 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:05.807 20:01:09 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:05.807 20:01:09 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:05.807 20:01:09 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.807 20:01:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.807 20:01:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.807 20:01:09 -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 ************************************ 00:07:05.807 START TEST nvmf_tcp 00:07:05.807 ************************************ 00:07:05.807 20:01:09 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.807 * Looking for test storage... 00:07:05.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:05.807 20:01:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.807 20:01:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:05.807 20:01:09 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:05.807 20:01:09 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.807 20:01:09 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.807 20:01:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.807 ************************************ 00:07:05.807 START TEST nvmf_target_core 00:07:05.807 ************************************ 00:07:05.807 20:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:06.068 * Looking for test storage... 00:07:06.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.068 ************************************ 00:07:06.068 START TEST nvmf_abort 00:07:06.068 ************************************ 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:06.068 * Looking for test storage... 
00:07:06.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.068 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:06.069 20:01:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:09.360 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:09.360 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.360 20:01:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:09.360 Found net devices under 0000:84:00.0: cvl_0_0 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:09.360 Found net devices under 0000:84:00.1: cvl_0_1 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.360 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.361 
20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:07:09.361 00:07:09.361 --- 10.0.0.2 ping statistics --- 00:07:09.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.361 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:07:09.361 00:07:09.361 --- 10.0.0.1 ping statistics --- 00:07:09.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.361 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1935464 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1935464 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1935464 ']' 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.361 20:01:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.361 [2024-07-24 20:01:12.810562] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:07:09.361 [2024-07-24 20:01:12.810675] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.361 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.361 [2024-07-24 20:01:12.902019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.361 [2024-07-24 20:01:13.046075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.361 [2024-07-24 20:01:13.046137] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.361 [2024-07-24 20:01:13.046157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.361 [2024-07-24 20:01:13.046172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.361 [2024-07-24 20:01:13.046186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
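The nvmf_tcp_init trace above boils down to a short root-only sequence: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling (cvl_0_1) stays in the default namespace as the initiator side. The sketch below is a distilled re-creation of those steps taken from the trace; the device names and the 10.0.0.0/24 addresses are specific to this run, not a general recipe.

    #!/usr/bin/env bash
    # Distilled from the nvmf_tcp_init steps traced above (run as root).
    set -e
    ip -4 addr flush cvl_0_0                  # start from clean addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk              # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in through the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the reverse path

With the target port isolated this way, nvmf_tgt itself has to be launched under ip netns exec, which is exactly what the common.sh@480 line above does.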
00:07:09.361 [2024-07-24 20:01:13.046314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.361 [2024-07-24 20:01:13.046393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.361 [2024-07-24 20:01:13.046397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.620 [2024-07-24 20:01:13.223944] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.620 Malloc0 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.620 Delay0 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.620 [2024-07-24 20:01:13.301773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.620 20:01:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:09.620 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.878 [2024-07-24 20:01:13.408935] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:11.778 Initializing NVMe Controllers 00:07:11.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:11.778 controller IO queue size 128 less than required 00:07:11.778 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:11.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:11.778 Initialization complete. Launching workers. 
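The rpc_cmd provisioning traced just before this abort run is effectively shorthand for plain scripts/rpc.py calls against the target's /var/tmp/spdk.sock. A distilled sketch follows, with every parameter copied from the trace; the $rpc variable is an abbreviation introduced here, not part of the test.

    # Storage stack behind the abort test, as direct rpc.py calls.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB bdev, 4 KiB blocks
    # Delay0 wraps Malloc0 with injected latency so that submitted reads and
    # writes stay in flight long enough for abort commands to catch them.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Note that even though the target runs inside cvl_0_0_ns_spdk, the RPC socket stays reachable from the default namespace, since pathname Unix sockets are not scoped by network namespaces; only the NVMe/TCP data path crosses the namespace boundary.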
00:07:11.778 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23898 00:07:11.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23959, failed to submit 62 00:07:11.778 success 23902, unsuccess 57, failed 0 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.778 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.778 rmmod nvme_tcp 00:07:11.778 rmmod nvme_fabrics 00:07:11.778 rmmod nvme_keyring 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1935464 ']' 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1935464 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1935464 ']' 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1935464 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1935464 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1935464' 00:07:12.036 killing process with pid 1935464 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1935464 00:07:12.036 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1935464 00:07:12.294 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:12.294 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:12.294 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:12.294 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:12.294 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:12.294 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.294 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.294 20:01:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:14.847 00:07:14.847 real 0m8.392s 00:07:14.847 user 0m11.141s 00:07:14.847 sys 0m3.402s 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.847 ************************************ 00:07:14.847 END TEST nvmf_abort 00:07:14.847 ************************************ 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:14.847 ************************************ 00:07:14.847 START TEST nvmf_ns_hotplug_stress 00:07:14.847 ************************************ 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:14.847 * Looking for test storage... 
00:07:14.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.847 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.848 20:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:17.383 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.383 20:01:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:17.383 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:17.383 Found net devices under 0000:84:00.0: cvl_0_0 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:17.383 Found net devices under 0000:84:00.1: cvl_0_1 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.383 20:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.383 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:17.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:17.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:07:17.384 00:07:17.384 --- 10.0.0.2 ping statistics --- 00:07:17.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.384 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:07:17.384 00:07:17.384 --- 10.0.0.1 ping statistics --- 00:07:17.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.384 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1937921 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1937921 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1937921 ']' 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
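waitforlisten, invoked here for the second target instance (pid 1937921), follows a simple poll-until-ready pattern. Below is a minimal sketch of that pattern, a reconstruction rather than the autotest helper itself; the function name, retry count, and sleep interval are illustrative.

    # Poll the target's RPC socket until it answers, bail out if the pid dies.
    waitforlisten_sketch() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            if scripts/rpc.py -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; then
                return 0                              # RPC server is up
            fi
            sleep 0.1
        done
        return 1                                      # timed out
    }
    # usage: waitforlisten_sketch "$nvmfpid"

rpc_get_methods is a cheap call that succeeds as soon as the RPC server is listening, which is what makes it a convenient readiness probe.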
00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.384 20:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.642 [2024-07-24 20:01:21.180807] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:07:17.643 [2024-07-24 20:01:21.180889] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.643 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.643 [2024-07-24 20:01:21.257071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.643 [2024-07-24 20:01:21.398633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.643 [2024-07-24 20:01:21.398705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.643 [2024-07-24 20:01:21.398746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.643 [2024-07-24 20:01:21.398772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.643 [2024-07-24 20:01:21.398795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.643 [2024-07-24 20:01:21.398920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.643 [2024-07-24 20:01:21.398987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.643 [2024-07-24 20:01:21.399000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.029 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.029 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:19.029 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.029 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:19.029 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:19.029 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.029 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:19.029 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:19.029 [2024-07-24 20:01:22.773907] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.294 20:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:19.858 20:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.115 
[2024-07-24 20:01:23.679348] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.115 20:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.372 20:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:20.938 Malloc0 00:07:20.938 20:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:21.503 Delay0 00:07:21.503 20:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.069 20:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:22.327 NULL1 00:07:22.585 20:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:22.843 20:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1938526 00:07:22.843 20:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:22.843 20:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:22.843 20:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.843 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.216 Read completed with error (sct=0, sc=11) 00:07:24.216 20:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.473 20:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:24.473 20:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:25.039 true 00:07:25.039 20:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:25.039 20:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.604 20:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.121 20:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:26.121 20:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:26.380 true 00:07:26.380 20:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:26.380 20:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.315 20:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.583 20:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:27.583 20:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:28.155 true 00:07:28.155 20:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:28.155 20:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.530 20:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.788 20:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:29.788 20:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:30.046 true 00:07:30.304 20:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:30.304 20:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.871 20:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.129 20:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:31.129 20:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:31.703 true 00:07:31.703 20:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:31.703 20:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.269 20:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.269 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:07:32.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.531 20:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:32.531 20:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:33.113 true 00:07:33.113 20:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:33.113 20:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.702 20:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.959 20:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:33.959 20:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:34.525 true 00:07:34.525 20:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:34.525 20:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.899 20:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.899 20:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:35.899 20:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:36.464 true 00:07:36.464 20:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:36.464 20:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.838 20:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.097 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:07:38.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.354 20:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:38.354 20:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:38.611 true 00:07:38.611 20:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:38.611 20:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.177 20:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.694 20:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:39.694 20:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:39.951 true 00:07:39.951 20:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526 00:07:39.951 20:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.517 20:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.033 20:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:41.033 20:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
00:07:41.033 20:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:41.033 20:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:41.597 true
00:07:41.597 20:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:41.597 20:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:42.163 20:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.421 20:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:42.421 20:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:42.682 true
00:07:42.682 20:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:42.682 20:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.940 20:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:43.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) [×5]
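The suppressed records are initiator-side completions for reads that raced the namespace hot-remove: while nsid 1 is detached, every read against it fails with status (sct=0, sc=11). Assuming the decimal formatting these prints use, sc=11 is 0x0b, which in the NVMe generic status set (sct=0) is Invalid Namespace or Format, the expected completion for I/O caught in flight by nvmf_subsystem_remove_ns:

  # Illustration only: decode the status code from the suppressed messages.
  printf 'sct=0, sc=%d -> generic status 0x%02x (Invalid Namespace or Format)\n' 11 11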
00:07:43.471 [2024-07-24 20:01:46.989896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* record repeated continuously, 20:01:46.990042 through 20:01:47.018077]
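The flood of ctrlr_bdev.c records is the target-side length guard in nvmf_bdev_ctrlr_read_cmd rejecting each read before it reaches the bdev: the requested transfer (NLB times the block size) is larger than the data buffer described by the command's SGL. Restated in shell with the numbers straight from the record, one 512-byte block requested into a 1-byte SGL:

  nlb=1 block_size=512 sgl_length=1
  if (( nlb * block_size > sgl_length )); then
      # 512 > 1, so the target completes the read with an error
      # instead of touching the Delay0 bdev.
      echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}"
  fi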
[identical *ERROR* record repeated, 20:01:47.018178 through 20:01:47.019263]
00:07:43.474 20:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:43.474 [2024-07-24 20:01:47.019369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [×2, second at 20:01:47.019486]
00:07:43.474 20:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
[identical *ERROR* record repeated, 20:01:47.019790 through 20:01:47.020879, interleaved with the RPC above]
[identical *ERROR* record repeated continuously from 20:01:47.020990 onward]
00:07:43.476 [2024-07-24 20:01:47.045925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:43.476 [2024-07-24 20:01:47.046030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.476 [2024-07-24 20:01:47.046142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.476 [2024-07-24 20:01:47.046256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.476 [2024-07-24 20:01:47.046358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.476 [2024-07-24 20:01:47.046492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.046596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.046718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.046826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.047465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.047573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.047683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.047793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.047897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.048966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049287] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.049978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.050957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.051919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 
[2024-07-24 20:01:47.052134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.052983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.053973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.054084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.054195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.054302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.054634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.054737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.054847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.054951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.055922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.056890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.057000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.057103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.057205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.057313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.057416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.057528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.057627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.477 [2024-07-24 20:01:47.057736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.057846] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.057952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.058954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.059056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.059179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.059293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.059403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.059523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.059634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.060470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.060572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.060684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.060784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.060892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 
[2024-07-24 20:01:47.061443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.061968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.062939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.063925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.064901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.065977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.066949] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.067057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.067166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.067278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.067626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.067733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.067848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.067951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.068953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.069061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.069164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.069275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.478 [2024-07-24 20:01:47.070202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.070306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.070419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.070532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.070634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.070735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 
[2024-07-24 20:01:47.070841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.070946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.071922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.072932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.073910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.074972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.075910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076345] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.076921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.077027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.077350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.077468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.077583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.077695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.077802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.077916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.078985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.079088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.079191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.079297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 
[2024-07-24 20:01:47.079402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.079523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.079627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.079738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.079834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.079938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.080914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.479 [2024-07-24 20:01:47.081018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.081132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.081247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.081353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.081487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.081593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.081716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.081820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.081927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.082034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.082143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.082253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.082359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.083486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.083597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.083702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.083795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.083898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.084977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.480 [2024-07-24 20:01:47.085968] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:43.480 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error lines repeated continuously from 20:01:47.086075 onward; duplicates elided ...]
00:07:43.481 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:43.481 [... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error line continues to repeat through 20:01:47.149490 (00:07:43.486); several hundred duplicate lines elided ...]
[2024-07-24 20:01:47.149602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.149708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.149818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.149928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.150929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.151040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.151158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.151489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.151601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.151717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.151827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.151934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.152997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.153946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.154056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.154165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.154272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.154370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.154489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.154593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.155521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.155634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.155749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.155856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.155969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156184] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.156944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.157898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.158892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 
[2024-07-24 20:01:47.159014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.159121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.159226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.486 [2024-07-24 20:01:47.159348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.159476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.159578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.159690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.159800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.159906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.160895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.161989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.162093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.162200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.162300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.162410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.162739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.162838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.162950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.163915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164802] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.164906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.165997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.166950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.167051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.167161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.167259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.167362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.167482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 
[2024-07-24 20:01:47.167598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.167705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.167813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.167919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.168908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.169012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.169119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.169224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.169331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.169460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.170624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.170728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.170839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.170945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.171050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.171153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.487 [2024-07-24 20:01:47.171264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.171365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.171485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.171597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.171700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.171808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.171922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.172956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.173955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174182] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.174941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.175900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.176851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 
[2024-07-24 20:01:47.176959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.177065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.177181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.177292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.177406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.177736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.177850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.177962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.178971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.179077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.179186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.179295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.179407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.179523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.180151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.180252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.180362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.180475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.180583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.180685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.180794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.180904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.181974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.182078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.182188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.182297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.182405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.182524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.488 [2024-07-24 20:01:47.182631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.182739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.182848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.182957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183280] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.183930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.184899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.185959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 
[2024-07-24 20:01:47.186068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.186181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.186295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.186401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.186523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.186637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.186745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.186852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.186962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.187292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.187395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.187518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.187623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.187732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.187844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.187953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.188931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.489 [2024-07-24 20:01:47.189037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:43.489 [2024-07-24 20:01:47.189142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:43.491 Message suppressed 999 times: [2024-07-24 20:01:47.213095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:43.491 Read completed with error (sct=0, sc=15)
00:07:43.769 
[2024-07-24 20:01:47.254259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.254359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.254484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.254588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.254696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.254804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.254913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.255926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.256998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.257110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.257217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.257316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.769 [2024-07-24 20:01:47.257420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.257541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.257648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.257756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.257860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.257962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.258965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.259100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.259206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.259309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.259439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.259559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.259668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.259783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.259886] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.260221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.260324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.260443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.260554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.260663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.260779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.260891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.260998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.261963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.262067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.262176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.262289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.262398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.262514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.262626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.262730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.262836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 
[2024-07-24 20:01:47.262934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.263917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.264917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.265028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.265137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.265857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.265959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.266916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.267928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.770 [2024-07-24 20:01:47.268942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269156] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.269938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.270931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.271899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 
[2024-07-24 20:01:47.272007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.272114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.272224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.272338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.272462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.272568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.272683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.272790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.273953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.274060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.274163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.274267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.274372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.274488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.274591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.274694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.274801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.275745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.275854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.275971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.276984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.277964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278625] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.278943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.279922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.280027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.280149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.280255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.280372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.280511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.280616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.771 [2024-07-24 20:01:47.280727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.280849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.280966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.281069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.281180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.281297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.281401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 
[2024-07-24 20:01:47.281517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.281628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.281757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.281859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.281966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.282064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.282173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.282277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.282398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.282515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.282636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.283975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.284967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.285958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.286940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287485] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.287911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.288983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.289977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.291141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.291248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 
[2024-07-24 20:01:47.291358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.291473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.291587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.291698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.291801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.291911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.292019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.292129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.292233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.292341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.292457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.292562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.292668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.772 [2024-07-24 20:01:47.292780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.292888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.292994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.293091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.293201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.293311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.293418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.293550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.293658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.293777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.293886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.294000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.773 [2024-07-24 20:01:47.294110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 
00:07:43.773 [2024-07-24 20:01:47.294224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
(the preceding *ERROR* line repeats several hundred times with wall-clock timestamps 20:01:47.294332 through 20:01:47.335740 and elapsed prefixes 00:07:43.773-00:07:43.777; identical repeats elided) 
00:07:43.777 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
(further identical *ERROR* repeats with wall-clock timestamps 20:01:47.336861 through 20:01:47.358134 and elapsed prefixes 00:07:43.777-00:07:43.779 elided) 
00:07:43.779 [2024-07-24 20:01:47.358249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780
[2024-07-24 20:01:47.358358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.358481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.358595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.358704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.358816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.358929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.359036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.359951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.360932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.361993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.362964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.363965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364749] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.364963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.365938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.366042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.366154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.366264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.366364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.366492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.366607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.366711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.366811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.367137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.367237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.367348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.367467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.367587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.367701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 
[2024-07-24 20:01:47.367813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.367922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.368943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.369927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.780 [2024-07-24 20:01:47.370036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.370990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.371946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.372959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373393] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.373949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.375980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.376948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.377059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.377170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 
[2024-07-24 20:01:47.377277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.377384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.377504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.377607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.377716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.377823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.377941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.378933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.379923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.380992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.381102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.381213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.381320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.381442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.381549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.381659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.781 [2024-07-24 20:01:47.381781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.381893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.381997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.382366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.382483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.382597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.382705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.382812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.382921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383136] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.383989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.384934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.385038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.385136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.385249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.385355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.385471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.385592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.385697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.385806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 
[2024-07-24 20:01:47.385918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.386923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.387912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.388986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.389097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.389981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.390990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.391990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392334] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.392998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.393109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.393217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.393313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.393422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.782 [2024-07-24 20:01:47.393536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.393650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.393757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.393857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.393955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.394937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.395049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 
[2024-07-24 20:01:47.395161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.395269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.395383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.395505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.395619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.395729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.395841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.395950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.396961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.398144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.398254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.398364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.398473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.398575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.398687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.398791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.398898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.783 [2024-07-24 20:01:47.399005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:43.783 [2024-07-24 20:01:47.399108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" error repeats verbatim, one line every ~100 microseconds, from 20:01:47.399202 through 20:01:47.446509; duplicate lines elided ...]
00:07:43.787 true
[... identical ctrlr_bdev.c:309 errors continue from 20:01:47.446613 through 20:01:47.453164; duplicate lines elided ...]
00:07:43.788 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
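The flood above is a single validation failing in a tight unit-test loop: each read asks for NLB 1 * block size 512 = 512 bytes of payload, but the request's SGL covers only 1 byte, so nvmf_bdev_ctrlr_read_cmd rejects the command before it ever reaches the backing bdev. The suppressed summary line matches that outcome: sct=0 is the NVMe "generic command status" code type, and sc=15 (0x0f) is Data SGL Length Invalid. A minimal C sketch of a check of this shape follows; io_req, io_rsp, and read_len_ok are illustrative stand-ins, not SPDK's actual types or API.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the transport request/response state. */
struct io_req { uint32_t sgl_length; };      /* bytes the SGL can hold */
struct io_rsp { uint8_t sct; uint8_t sc; };  /* NVMe status type/code  */

/* Reject a read whose payload (NLB * block size) exceeds the SGL. */
static bool
read_len_ok(uint64_t nlb, uint32_t block_size,
            const struct io_req *req, struct io_rsp *rsp)
{
    if (nlb * block_size > req->sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                nlb, block_size, req->sgl_length);
        rsp->sct = 0x0;  /* generic command status           */
        rsp->sc = 0xf;   /* Data SGL Length Invalid -> sc=15 */
        return false;    /* complete the command with error, no I/O */
    }
    return true;
}

int main(void)
{
    struct io_req req = { .sgl_length = 1 };  /* 1-byte SGL, as in the log */
    struct io_rsp rsp = { 0, 0 };

    read_len_ok(1, 512, &req, &rsp);          /* 512 > 1: prints the error */
    return rsp.sc == 0xf ? 0 : 1;
}

Because the command is completed with that status instead of being submitted, each iteration costs only a log write, which is why the target emits the identical error line at microsecond intervals while the initiator side collapses the matching completions into the "Message suppressed 999 times" summary.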
[... the identical ctrlr_bdev.c:309 error resumes at 20:01:47.453270 and repeats through 20:01:47.458584; duplicate lines elided ...]
00:07:43.788 [2024-07-24 20:01:47.458688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:07:43.788 [2024-07-24 20:01:47.458795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.788 [2024-07-24 20:01:47.458894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.458999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.460193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.460291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.460395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.460509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.460609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.460711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.460823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.460931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.461982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462525] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.462941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.463954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.464968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 [2024-07-24 20:01:47.465077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.789 
00:07:43.789 [... same *ERROR* line repeated, 20:01:47.465183 through 20:01:47.465419 ...]
00:07:43.789 20:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:43.789 [... same *ERROR* line repeated, 20:01:47.465560 through 20:01:47.465666 ...]
00:07:43.789 20:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
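A note on what this flood means: nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309 in this build) fails any Read whose requested transfer, NLB times the namespace block size, exceeds the payload length described by the command's SGL. Every read here asks for 1 block of 512 bytes against an SGL covering only 1 byte, so each command is rejected and logged, while ns_hotplug_stress.sh line 44 checks (kill -0) that PID 1938526, presumably the stress I/O process, is still alive and line 45 hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 over RPC. A minimal sketch of this kind of length guard, assuming illustrative struct and function names rather than the actual SPDK types:

    /* Sketch of the length check behind the repeated error above; names are
     * illustrative, the real check lives in SPDK's ctrlr_bdev.c. */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct read_cmd {
        uint64_t num_blocks;  /* NLB: logical blocks to read */
        uint32_t block_size;  /* namespace block size in bytes (512 here) */
        uint32_t sgl_length;  /* payload length described by the SGL */
    };

    /* Returns true when the SGL can hold the requested data; the false branch
     * mirrors the *ERROR* line above, after which the real code fails the
     * command back to the host instead of submitting the bdev read. */
    static bool read_len_ok(const struct read_cmd *cmd)
    {
        if (cmd->num_blocks * cmd->block_size > cmd->sgl_length) {
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n",
                    cmd->num_blocks, cmd->block_size, cmd->sgl_length);
            return false;
        }
        return true;
    }

    int main(void)
    {
        /* The exact case in this log: a 1-block (512 B) read, 1-byte SGL. */
        struct read_cmd cmd = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
        return read_len_ok(&cmd) ? 0 : 1;
    }

The message keeps repeating at ~100 µs intervals before and after the nvmf_subsystem_remove_ns call because I/O keeps being issued, and rejected, while the namespace is hot-removed.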
00:07:43.789 [... same *ERROR* line repeated, 20:01:47.465799 through 20:01:47.515237 (elapsed 00:07:43.789-00:07:43.793) ...]
00:07:43.793 [2024-07-24 20:01:47.515336] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.515451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.515559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.515666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.515768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.515876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.515976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.793 [2024-07-24 20:01:47.516958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.517965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.518079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 
[2024-07-24 20:01:47.518192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.518304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.518407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.518538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.518646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.518754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.518863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.518983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.519947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.520915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.521030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.521132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.521467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.521579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.521688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.521796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.521907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.522919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.523896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524000] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.524956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.525912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.526023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.526131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.526238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.526354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.526472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.526579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.526689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 
[2024-07-24 20:01:47.526800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.526909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.527899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.794 [2024-07-24 20:01:47.528007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.528119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.528229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.529396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.529512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.529624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.529730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.529831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.529940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.530918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.531993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.532982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533443] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.533985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.534988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.535945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.536045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.536146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 
[2024-07-24 20:01:47.536250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.536666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.536790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.536902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:43.795 [2024-07-24 20:01:47.537891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.537996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.538957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.539926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.540925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.541954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542174] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.542966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.543077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.543188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.543298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.543408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.071 [2024-07-24 20:01:47.543538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.544701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.544808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.544912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.545895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 
[2024-07-24 20:01:47.546114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.546969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.547975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.548946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.549908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.550999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.551109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.551219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.551324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.551442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.551552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.551889] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.551992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.552987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.553093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.553195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.553303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.553404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.553522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.553632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.554276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.554378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.554497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.554619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.554726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.072 [2024-07-24 20:01:47.554827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.073 [2024-07-24 20:01:47.554936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.073 [2024-07-24 20:01:47.555035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.073 [2024-07-24 20:01:47.555142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.073 
[2024-07-24 20:01:47.555241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:44.073 [the same *ERROR* line repeats once per queued read; duplicate entries from 20:01:47.555 through 20:01:47.618 omitted]
00:07:44.074 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:44.077 [2024-07-24 20:01:47.618769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
size 512 > SGL length 1 00:07:44.077 [2024-07-24 20:01:47.618976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.619930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.620931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.621037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.621142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.621259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.621367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.621494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.622615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.622716] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.622826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.622924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.623982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.624957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.625065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.625181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.625285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.625397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 
[2024-07-24 20:01:47.625521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.625635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.625747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.625850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.625969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.626934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.627917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.628988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.629096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.629202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.629314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.629425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.629750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.629857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.629965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.630965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631282] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.631937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.078 [2024-07-24 20:01:47.632983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.633090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.633196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.633300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.634224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.634331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.634453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.634562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.634674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.634789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 
[2024-07-24 20:01:47.634899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.635895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.636986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.637949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.638944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.639923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640472] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.640898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.641013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.641124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.641461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.641561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.641674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.641779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.641874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.641986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.642962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.643067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.643179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.643290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.643395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 
[2024-07-24 20:01:47.643510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.643606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.643711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.643818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.643922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.644938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.645047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.645156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.079 [2024-07-24 20:01:47.645266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.645379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.645499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.645611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.645720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.645829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.645936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.646047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.646156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.646267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.646377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.647968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.648963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649726] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.649949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.650926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.651906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 
[2024-07-24 20:01:47.652528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.652961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.653955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.654283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.654394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.654517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.654629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.654742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.654848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.654957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.655947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.656051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.657949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.080 [2024-07-24 20:01:47.658060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.658163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.658276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.658387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.658514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.658616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.658732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.658837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.658943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.659053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081 [2024-07-24 20:01:47.659165] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.081
[... the line above repeats verbatim for every timestamp from 2024-07-24 20:01:47.659273 through 20:01:47.691736; several hundred duplicate lines collapsed ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:44.083
[... the same error then repeats verbatim for every timestamp from 2024-07-24 20:01:47.692886 through 20:01:47.722997; duplicate lines collapsed ...]
[2024-07-24 20:01:47.723104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.723212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.723320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.723426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.723546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.723658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.723772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.723882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.723983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.724932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.725992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.726986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.727965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728590] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.728896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.729003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.729115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.729223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.729324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.729444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.729556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.730753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.730857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.730963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.731949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.732043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.732147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.732251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.732360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 
[2024-07-24 20:01:47.732479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.732587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.732694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.732798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.732901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.733977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.734083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.734192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.734302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.734410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.734534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.734645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.734758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.086 [2024-07-24 20:01:47.734869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.734974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.735974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.736955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.737056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.737161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.737267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.737377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.737496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.737602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.737927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738242] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.738909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.739019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.739125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.739230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.739339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.739460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.739572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.739687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.740332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.740446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.740565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.740671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.740783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.740880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.740988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.741093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.741198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.741304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.741412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.741535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 
[2024-07-24 20:01:47.741630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.741746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.741853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.741956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.742909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.743915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.744911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.745996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.746104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.746196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.746298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.746403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.087 [2024-07-24 20:01:47.746528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.746630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.746731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.746841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.746947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.747050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.747154] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.747479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.747584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.747699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.747809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.747920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.748899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.749965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.750067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 
[2024-07-24 20:01:47.750167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.750279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.750386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.750506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.750614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.750724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.750842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.750949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.751956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.752071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.752189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.752298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.752411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.753227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.753337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.753462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.753568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.753672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.753782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.753889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.753988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.754972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.755970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756521] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.756957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.757927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.758965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.759078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.759191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 
[2024-07-24 20:01:47.759301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.759409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.759529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.759641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.759756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.759866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.759981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.760094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.760422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.760546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.760663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.760779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.760872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.760986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.761091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.088 [2024-07-24 20:01:47.761199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.761313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.761415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.761531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.761640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.761742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.761852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.761955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.762053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.762158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:44.089 [2024-07-24 20:01:47.763241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:44.089 [2024-07-24 20:01:47.763351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical nvmf_bdev_ctrlr_read_cmd error lines, repeated for every queued read between 2024-07-24 20:01:47.763351 and 20:01:47.791314, trimmed]
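The error trimmed above is the target's size check failing on every queued read: the transfer the command asks for (NLB x block size, here 1 x 512 bytes) is larger than the 1-byte buffer its SGL describes, so nvmf_bdev_ctrlr_read_cmd rejects the request and the host then sees each read complete with an error (the suppressed sct=0, sc=11 messages that follow). A minimal bash restatement of the comparison printed in the message; the variable names are illustrative, not SPDK's:

    nlb=1          # logical blocks requested by the READ
    block_size=512 # namespace block size in bytes
    sgl_length=1   # bytes addressable through the request's SGL
    if (( nlb * block_size > sgl_length )); then
        echo "ERROR: Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}"
    fi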
00:07:45.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.063 20:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:45.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.579 20:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:45.579 20:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:46.145 true
00:07:46.145 20:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:46.145 20:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.710 20:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:46.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:46.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:46.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:46.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:46.967 20:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:46.967 20:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:47.224 true
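Each pass above is one iteration of the hotplug loop this test traces at ns_hotplug_stress.sh lines 44-50: while the background I/O generator (PID 1938526 in this run) is still alive, namespace 1 is detached, the Delay0 bdev is re-attached as a namespace, and the NULL1 bdev is grown by one unit. A sketch reconstructed from those trace lines, not the verbatim script; $pid and the starting null_size are assumptions from context:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$pid" 2>/dev/null; do    # loop until the I/O generator exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))        # the trace shows 1014, 1015, 1016, ...
        "$rpc_py" bdev_null_resize NULL1 "$null_size"   # prints "true" on success
    done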
00:07:47.224 20:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:47.224 20:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.156 20:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:48.414 20:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:48.414 20:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:48.671 true
00:07:48.671 20:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:48.671 20:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:49.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:49.234 20:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:49.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:49.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:49.757 20:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:07:49.757 20:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:07:50.068 true
00:07:50.068 20:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:50.068 20:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.440 20:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.954 20:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:51.954 20:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:52.212 true
00:07:52.212 20:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:52.212 20:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:53.144 20:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:53.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:53.144 Initializing NVMe Controllers
00:07:53.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:53.144 Controller IO queue size 128, less than required.
00:07:53.144 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:53.144 Controller IO queue size 128, less than required.
00:07:53.144 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:53.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:53.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:53.144 Initialization complete. Launching workers.
00:07:53.144 ========================================================
00:07:53.144 Latency(us)
00:07:53.144 Device Information : IOPS MiB/s Average min max
00:07:53.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3875.73 1.89 26850.07 3380.83 1126540.15
00:07:53.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12083.72 5.90 10592.84 4306.42 618760.31
00:07:53.144 ========================================================
00:07:53.144 Total : 15959.45 7.79 14540.89 3380.83 1126540.15
00:07:53.144
00:07:53.402 20:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:07:53.402 20:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:07:53.969 true
00:07:53.969 20:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1938526
00:07:53.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1938526) - No such process
00:07:53.969 20:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1938526
00:07:53.969 20:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.226 20:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:54.792 20:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:54.792 20:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:54.792 20:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:54.792 20:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:54.792 20:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:55.364 null0
00:07:55.364 20:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:55.364 20:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:55.364 20:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:55.929 null1
00:07:55.929 20:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:55.929 20:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:55.929 20:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:56.187 null2
00:07:56.187 20:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
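One detail worth checking in the Latency(us) summary above: the Total row is consistent with the two per-namespace rows, since 3875.73 + 12083.72 = 15959.45 IOPS and the total average latency is the IOPS-weighted mean of the per-namespace averages. A quick awk verification of that arithmetic, with the figures copied from the table:

    awk 'BEGIN {
        i1 = 3875.73;  a1 = 26850.07    # NSID 1: IOPS, average latency (us)
        i2 = 12083.72; a2 = 10592.84    # NSID 2: IOPS, average latency (us)
        printf "total IOPS %.2f, weighted average %.2f us\n", i1 + i2, (i1 * a1 + i2 * a2) / (i1 + i2)
    }'
    # prints: total IOPS 15959.45, weighted average 14540.89 us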
00:07:56.187 20:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:56.187 20:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:56.446 null3
00:07:56.446 20:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:56.446 20:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:56.446 20:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:57.012 null4
00:07:57.012 20:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:57.012 20:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:57.012 20:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:57.575 null5
00:07:57.575 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:57.575 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:57.575 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:57.833 null6
00:07:57.833 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:57.833 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:57.833 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:58.091 null7
00:07:58.091 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:58.091 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1942695 1942696 1942698 1942700 1942702 1942704 1942706 1942708
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.092 20:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:58.660 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:58.660 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:58.660 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:58.660 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.660 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:58.660 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:58.660 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:58.661 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.918 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:58.919 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:59.180 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:59.180 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:59.180 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:59.180 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:59.180 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:59.180 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:59.180 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:59.180 20:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:59.467 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.467 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:59.468 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:59.749 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:59.749 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.749 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.749 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.749 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.749 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.008 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.265 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.266 20:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.524 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.524 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.524 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.524 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.524 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.524 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.524 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.524 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.783 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.042 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.042 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.042 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.042 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.042 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.042 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.042 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.300 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.300 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.300 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.300 20:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.300 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.559 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.559 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.559 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.559 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.559 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.559 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.559 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.559 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.818 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.818 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.818 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.818 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.818 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.818 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.077 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.335 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.335 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.335 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.335 20:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.335 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.335 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.335 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.335 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.335 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.592 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.593 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.593 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.593 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.593 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.851 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.851 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.851 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.851 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.851 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.851 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.851 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.851 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.110 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.110 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.110 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.110 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.110 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.110 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.368 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.368 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.368 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.368 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.368 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.368 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.368 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.368 20:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.368 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.627 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.885 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.144 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.403 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.403 20:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.403 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.403 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.403 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.403 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.403 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.403 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.661 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
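The interleaved (( ++i )) / (( i < 10 )) checks and out-of-order nsids above are the tail of the hot-plug stress loop (the sh@16-sh@18 markers in ns_hotplug_stress.sh). A minimal sketch of the churn it generates, reconstructed from the xtrace alone; the real script may be organized differently, and the null0..null7 bdevs and the nqn.2016-06.io.spdk:cnode1 subsystem are assumed to have been created earlier in the test:

    # Hedged reconstruction, not the verbatim script: eight workers, each
    # repeatedly attaching and detaching its own namespace on the target.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for n in {1..8}; do
        (
            for ((i = 0; i < 10; ++i)); do
                # attach bdev null$((n - 1)) as namespace $n ...
                "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
                # ... and immediately hot-remove it again
                "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
            done
        ) &
    done
    wait

Running the add/remove pairs in parallel subshells is what makes the nsid order vary from burst to burst in the log above.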
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1937921 ']'
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1937921
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1937921 ']'
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1937921
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1937921
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1937921'
killing process with pid 1937921
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1937921
00:08:04.920 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1937921
00:08:05.487 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:05.487 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:05.487 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:05.487 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:05.487 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:05.487 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:05.487 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:05.487 20:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:07.389 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:07.389
00:08:07.389 real 0m52.928s
00:08:07.389 user 4m0.255s
00:08:07.389 sys 0m19.107s
00:08:07.389 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:07.389 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:07.389 ************************************
00:08:07.389 END TEST nvmf_ns_hotplug_stress
00:08:07.389 ************************************
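The START/END banners and the real/user/sys block above come from the autotest harness's run_test wrapper, which times each suite and is invoked again immediately below for nvmf_delete_subsystem. A minimal reconstruction of the wrapper's shape; the real helper in common/autotest_common.sh also validates its argument count (the '[' 3 -le 1 ']' check visible below) and manages xtrace state:

    # Hedged sketch of the harness wrapper, not the verbatim helper.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # bash's time prints the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g. run_test nvmf_delete_subsystem ./delete_subsystem.sh --transport=tcp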
00:08:07.389 20:02:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:07.389 20:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:07.389 20:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:07.389 20:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:07.389 ************************************
00:08:07.389 START TEST nvmf_delete_subsystem
00:08:07.389 ************************************
00:08:07.389 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:07.389 * Looking for test storage...
00:08:07.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:07.647 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.648 20:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
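For orientation: the gather_supported_nvmf_pci_devs steps that follow match PCI vendor:device IDs against the arrays declared above (intel=0x8086, mellanox=0x15b3), so on this node the e810 array ends up holding the two 0x159b ports found at 0000:84:00.0 and 0000:84:00.1. A rough manual equivalent, assuming a stock lspci alongside the same sysfs layout the script reads below:

    # List Intel E810 NICs by PCI vendor:device ID (0x8086:0x159b, as matched below)
    lspci -D -d 8086:159b
    # Resolve the net device behind a PCI address, the same sysfs path nvmf/common.sh globs
    ls /sys/bus/pci/devices/0000:84:00.0/net/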
00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:10.180 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:10.180 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.180 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:10.180 Found net devices under 0000:84:00.0: cvl_0_0 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:10.181 Found net devices under 0000:84:00.1: cvl_0_1 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.181 20:02:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.181 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:08:10.440 00:08:10.440 --- 10.0.0.2 ping statistics --- 00:08:10.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.440 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:08:10.440 00:08:10.440 --- 10.0.0.1 ping statistics --- 00:08:10.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.440 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.440 20:02:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1945730 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1945730 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1945730 ']' 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.440 20:02:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.440 [2024-07-24 20:02:14.068458] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
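Condensed from the nvmf_tcp_init trace above: the target-side port cvl_0_0 is moved into its own network namespace so initiator and target traffic cross the physical link, the firewall is opened for NVMe/TCP port 4420, connectivity is verified with one ping in each direction, and nvmf_tgt is then launched inside that namespace. A sketch of the same steps, using the interface names and addresses from this run (nvmf_tgt stands in for the full build/bin path in the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &     # target on cores 0-1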
00:08:10.440 [2024-07-24 20:02:14.068561] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.440 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.440 [2024-07-24 20:02:14.155668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.699 [2024-07-24 20:02:14.336781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.699 [2024-07-24 20:02:14.336896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.699 [2024-07-24 20:02:14.336932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.699 [2024-07-24 20:02:14.336963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.699 [2024-07-24 20:02:14.336988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.699 [2024-07-24 20:02:14.337115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.699 [2024-07-24 20:02:14.337123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.633 [2024-07-24 20:02:15.133642] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.633 [2024-07-24 20:02:15.150906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.633 NULL1 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.633 Delay0 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.633 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.634 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1945880 00:08:11.634 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:11.634 20:02:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:11.634 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.634 [2024-07-24 20:02:15.254825] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
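The shape of the test is now visible: a TCP transport and subsystem sit on top of a delay bdev that adds roughly one second to every I/O (the four 1000000 values are microseconds, which matches the 0.9 to 1.0 s average latencies perf reports below), spdk_nvme_perf drives 512 B random I/O at queue depth 128 with a 70/30 read/write mix from cores 2 and 3 for 5 s, and the subsystem is then deleted mid-workload; the storm of "completed with error" completions that follows is the point of the exercise. The rpc_cmd sequence, condensed from this trace:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512 B blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # ...and with perf connected to 10.0.0.2:4420, tear the subsystem out from under it:
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1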
00:08:13.546 20:02:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.546 20:02:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.546 20:02:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 [2024-07-24 20:02:17.412111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdea4000c00 is same with the state(5) to be set 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 
00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Write completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 Read completed with error (sct=0, sc=8) 00:08:13.805 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed 
with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, 
sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 Read completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 Write completed with error (sct=0, sc=8) 00:08:13.806 starting I/O failed: -6 00:08:13.806 starting I/O failed: -6 00:08:13.806 starting I/O failed: -6 00:08:13.806 starting I/O failed: -6 00:08:13.806 starting I/O failed: -6 00:08:14.741 [2024-07-24 20:02:18.358366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203dac0 is same with the state(5) to be set 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read 
completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 [2024-07-24 20:02:18.413117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdea400d000 is same with the state(5) to be set 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 [2024-07-24 20:02:18.413367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdea400d660 is same with the state(5) to be set 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read 
completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 Write completed with error (sct=0, sc=8) 00:08:14.741 [2024-07-24 20:02:18.413739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c8f0 is same with the state(5) to be set 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.741 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Write completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 Read completed with error (sct=0, sc=8) 00:08:14.742 [2024-07-24 20:02:18.414962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c3e0 is same with the state(5) to be set 00:08:14.742 Initializing NVMe Controllers 00:08:14.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:08:14.742 Controller IO queue size 128, less than required. 00:08:14.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:14.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:14.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:14.742 Initialization complete. Launching workers. 00:08:14.742 ======================================================== 00:08:14.742 Latency(us) 00:08:14.742 Device Information : IOPS MiB/s Average min max 00:08:14.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.51 0.09 903997.42 882.40 1017512.85 00:08:14.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.73 0.08 903747.82 932.18 1017510.21 00:08:14.742 ======================================================== 00:08:14.742 Total : 354.24 0.17 903879.94 882.40 1017512.85 00:08:14.742 00:08:14.742 [2024-07-24 20:02:18.416074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203dac0 (9): Bad file descriptor 00:08:14.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:14.742 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.742 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:14.742 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1945880 00:08:14.742 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1945880 00:08:15.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1945880) - No such process 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1945880 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1945880 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1945880 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:15.309 20:02:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.309 [2024-07-24 20:02:18.939301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1946297 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1946297 00:08:15.309 20:02:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.309 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.309 [2024-07-24 20:02:18.999782] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
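The kill -0 / sleep 0.5 lines that follow are delete_subsystem.sh waiting for the 3-second perf run (pid 1946297) against the recreated subsystem to exit; this pass runs to completion, and the latency summary further down shows clean averages right at the delay bdev's one second per I/O. A sketch of that wait loop, reconstructed from the script line numbers in the trace (the failure branch is an assumption; the trace only shows the (( delay++ > 20 )) guard):

    delay=0
    while kill -0 "$perf_pid"; do     # perf still running?
        sleep 0.5
        (( delay++ > 20 )) && exit 1  # assumed bail-out after ~10 s of polling
    done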
00:08:15.875 20:02:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.875 20:02:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1946297 00:08:15.875 20:02:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.441 20:02:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.441 20:02:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1946297 00:08:16.441 20:02:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.699 20:02:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.699 20:02:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1946297 00:08:16.699 20:02:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.265 20:02:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.265 20:02:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1946297 00:08:17.265 20:02:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.831 20:02:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.831 20:02:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1946297 00:08:17.831 20:02:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.397 20:02:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.397 20:02:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1946297 00:08:18.397 20:02:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.655 Initializing NVMe Controllers 00:08:18.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:18.655 Controller IO queue size 128, less than required. 00:08:18.655 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:18.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:18.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:18.655 Initialization complete. Launching workers. 
00:08:18.655 ======================================================== 00:08:18.655 Latency(us) 00:08:18.655 Device Information : IOPS MiB/s Average min max 00:08:18.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005041.01 1000223.16 1016821.06 00:08:18.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007274.02 1000425.95 1042828.08 00:08:18.655 ======================================================== 00:08:18.655 Total : 256.00 0.12 1006157.52 1000223.16 1042828.08 00:08:18.655 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1946297 00:08:18.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1946297) - No such process 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1946297 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.914 rmmod nvme_tcp 00:08:18.914 rmmod nvme_fabrics 00:08:18.914 rmmod nvme_keyring 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1945730 ']' 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1945730 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1945730 ']' 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1945730 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1945730 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1945730' 00:08:18.914 killing process with pid 1945730 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1945730 00:08:18.914 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1945730 00:08:19.174 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.174 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.174 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.174 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.174 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.174 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.174 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.174 20:02:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.715 20:02:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.715 00:08:21.715 real 0m13.845s 00:08:21.715 user 0m29.985s 00:08:21.715 sys 0m3.584s 00:08:21.715 20:02:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.715 20:02:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.715 ************************************ 00:08:21.715 END TEST nvmf_delete_subsystem 00:08:21.715 ************************************ 00:08:21.715 20:02:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:21.715 20:02:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:21.715 20:02:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.715 20:02:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.715 ************************************ 00:08:21.715 START TEST nvmf_host_management 00:08:21.715 ************************************ 00:08:21.715 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:21.715 * Looking for test storage... 
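A note on the teardown traced just before this test started: nvmftestfini first unloads the kernel NVMe/TCP stack (the modprobe -r and rmmod lines), then stops the target with killprocess. The helper inspects what it is about to kill so that a target wrapped in sudo can be handled differently from one running directly, which is why the trace compares the process name against sudo. A rough sketch of killprocess reconstructed from the traced checks in common/autotest_common.sh; the real helper differs in details:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                          # @950: refuse an empty pid
        kill -0 $pid                                       # @954: make sure it is still running
        if [[ $(uname) == Linux ]]; then                   # @955
            local process_name
            process_name=$(ps --no-headers -o comm= $pid)  # @956: reactor_0 in this run
        fi
        if [[ $process_name == sudo ]]; then               # @960: assumed branch, not taken here
            sudo kill $pid
        else
            echo "killing process with pid $pid"           # @968
            kill $pid                                      # @969
        fi
        wait $pid                                          # @974: reap and propagate the exit status
    }

Since the target here is reactor_0, the plain kill/wait path is taken and the nvmf_delete_subsystem suite ends cleanly before nvmf_host_management begins.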
00:08:21.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.716 20:02:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.251 
20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:24.251 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:24.251 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:24.251 Found net devices under 0000:84:00.0: cvl_0_0 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:24.251 Found net devices under 0000:84:00.1: cvl_0_1 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.251 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:24.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:08:24.252 00:08:24.252 --- 10.0.0.2 ping statistics --- 00:08:24.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.252 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:08:24.252 00:08:24.252 --- 10.0.0.1 ping statistics --- 00:08:24.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.252 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1948775 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1948775 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1948775 ']' 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.252 20:02:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.252 [2024-07-24 20:02:27.949221] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:08:24.252 [2024-07-24 20:02:27.949351] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.252 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.511 [2024-07-24 20:02:28.059601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.511 [2024-07-24 20:02:28.204674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.511 [2024-07-24 20:02:28.204744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.511 [2024-07-24 20:02:28.204763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.511 [2024-07-24 20:02:28.204780] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.511 [2024-07-24 20:02:28.204795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.511 [2024-07-24 20:02:28.204895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.511 [2024-07-24 20:02:28.204989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.511 [2024-07-24 20:02:28.205082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:24.511 [2024-07-24 20:02:28.205089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.770 [2024-07-24 20:02:28.396841] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.770 Malloc0 00:08:24.770 [2024-07-24 20:02:28.467163] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1948827 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1948827 /var/tmp/bdevperf.sock 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1948827 ']' 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
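Target bring-up in starttarget is split in two above: the TCP transport is created with a direct RPC (@18), while the bdev/subsystem plumbing is batched through a temporary rpcs.txt that is written with cat and replayed through a single rpc_cmd session (@22, @23, @30). The trace does not echo the file's contents; the batch below is an assumption inferred only from the visible side effects (the Malloc0 bdev and the listener notice on 10.0.0.2:4420), using the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set at @11-@12, and echo is used here in place of the script's heredoc for compactness:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # @18, traced directly
    rm -rf $testdir/rpcs.txt                           # @22
    # @23: write the batch; the exact commands are an assumption
    {
        echo "bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc0"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT"
    } > $testdir/rpcs.txt
    rpc_cmd < $testdir/rpcs.txt                        # @30: replays the whole batch in one session

bdevperf is then started against /var/tmp/bdevperf.sock with a configuration generated on the fly by gen_nvmf_target_json; the heredoc that produces it is traced next.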
00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:24.770 { 00:08:24.770 "params": { 00:08:24.770 "name": "Nvme$subsystem", 00:08:24.770 "trtype": "$TEST_TRANSPORT", 00:08:24.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.770 "adrfam": "ipv4", 00:08:24.770 "trsvcid": "$NVMF_PORT", 00:08:24.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.770 "hdgst": ${hdgst:-false}, 00:08:24.770 "ddgst": ${ddgst:-false} 00:08:24.770 }, 00:08:24.770 "method": "bdev_nvme_attach_controller" 00:08:24.770 } 00:08:24.770 EOF 00:08:24.770 )") 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:24.770 20:02:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:24.770 "params": { 00:08:24.770 "name": "Nvme0", 00:08:24.770 "trtype": "tcp", 00:08:24.770 "traddr": "10.0.0.2", 00:08:24.770 "adrfam": "ipv4", 00:08:24.770 "trsvcid": "4420", 00:08:24.770 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.770 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:24.770 "hdgst": false, 00:08:24.770 "ddgst": false 00:08:24.770 }, 00:08:24.770 "method": "bdev_nvme_attach_controller" 00:08:24.770 }' 00:08:24.770 [2024-07-24 20:02:28.548051] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:08:24.770 [2024-07-24 20:02:28.548141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948827 ] 00:08:25.029 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.029 [2024-07-24 20:02:28.623952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.029 [2024-07-24 20:02:28.762399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.596 Running I/O for 10 seconds... 
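The heredoc just traced is gen_nvmf_target_json (nvmf/common.sh@532-@558), which builds the --json config handed to bdevperf on /dev/fd/63: one bdev_nvme_attach_controller entry is rendered per subsystem argument (here just 0), the entries are comma-joined via IFS, and jq pretty-prints the result. A condensed sketch of the mechanism; the real helper renders the entry with the traced heredoc and wraps it in a fuller bdev-subsystem configuration, so treat the final line as an approximation:

    gen_nvmf_target_json() {
        local subsystem config=()        # @532
        for subsystem in "${@:-1}"; do   # @534: default to subsystem 1
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": %s, "ddgst": %s}, "method": "bdev_nvme_attach_controller"}' \
                "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
                "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")   # @554
        done
        local IFS=,                      # @557: comma-join when several subsystems are passed
        jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"   # @556/@558
    }

With this run's environment (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420) the template expands to exactly the Nvme0 attach block printed at @558 above.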
00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:25.596 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.857 20:02:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.857 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.857 [2024-07-24 20:02:29.576484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.857 [2024-07-24 20:02:29.576553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.576577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.857 [2024-07-24 20:02:29.576596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.576615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.857 [2024-07-24 20:02:29.576644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.576663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.857 [2024-07-24 20:02:29.576681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.576699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b5540 is same with the state(5) to be set 00:08:25.857 [2024-07-24 20:02:29.577146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 
20:02:29.577179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577636] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.577966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.577987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.857 [2024-07-24 20:02:29.578971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.857 [2024-07-24 20:02:29.578991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.858 [2024-07-24 20:02:29.579246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 20:02:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:25.858 [2024-07-24 20:02:29.579439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24
20:02:29.579759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.579881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.858 [2024-07-24 20:02:29.579900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.858 [2024-07-24 20:02:29.580018] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16c5d70 was disconnected and freed. reset controller. 00:08:25.858 [2024-07-24 20:02:29.581546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:25.858 task offset: 65536 on job bdev=Nvme0n1 fails 00:08:25.858 00:08:25.858 Latency(us) 00:08:25.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.858 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:25.858 Job: Nvme0n1 ended in about 0.44 seconds with error 00:08:25.858 Verification LBA range: start 0x0 length 0x400 00:08:25.858 Nvme0n1 : 0.44 1152.26 72.02 144.03 0.00 47721.90 4004.98 46020.84 00:08:25.858 =================================================================================================================== 00:08:25.858 Total : 1152.26 72.02 144.03 0.00 47721.90 4004.98 46020.84 00:08:25.858 [2024-07-24 20:02:29.584099] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.858 [2024-07-24 20:02:29.584138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b5540 (9): Bad file descriptor 00:08:26.117 [2024-07-24 20:02:29.676654] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
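For readers following the trace: the failed run above is the point of the test (the target was killed mid-I/O, so every queued WRITE completes with ABORTED - SQ DELETION and the host resets the controller). The retry below regenerates the bdevperf config on the fly and hands it in over an inherited file descriptor (--json /dev/fd/62). A minimal sketch of that pattern, with names, addresses, and flags copied from the trace; the "subsystems"/"bdev" envelope and the process-substitution form are illustrative assumptions here, not the test's literal code:

gen_target_json_sketch() {
	# Emit the bdev_nvme_attach_controller call the trace prints, wrapped in
	# the config envelope bdevperf expects (envelope assumed, see above).
	cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}
# bdevperf reads the config from any readable fd; /dev/fd/62 in the trace and
# process substitution here are the same mechanism.
./build/examples/bdevperf --json <(gen_target_json_sketch) -q 64 -o 65536 -w verify -t 1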
00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1948827 00:08:27.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1948827) - No such process 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:27.053 { 00:08:27.053 "params": { 00:08:27.053 "name": "Nvme$subsystem", 00:08:27.053 "trtype": "$TEST_TRANSPORT", 00:08:27.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.053 "adrfam": "ipv4", 00:08:27.053 "trsvcid": "$NVMF_PORT", 00:08:27.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.053 "hdgst": ${hdgst:-false}, 00:08:27.053 "ddgst": ${ddgst:-false} 00:08:27.053 }, 00:08:27.053 "method": "bdev_nvme_attach_controller" 00:08:27.053 } 00:08:27.053 EOF 00:08:27.053 )") 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:27.053 20:02:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:27.053 "params": { 00:08:27.053 "name": "Nvme0", 00:08:27.053 "trtype": "tcp", 00:08:27.053 "traddr": "10.0.0.2", 00:08:27.053 "adrfam": "ipv4", 00:08:27.053 "trsvcid": "4420", 00:08:27.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:27.053 "hdgst": false, 00:08:27.053 "ddgst": false 00:08:27.053 }, 00:08:27.053 "method": "bdev_nvme_attach_controller" 00:08:27.053 }' 00:08:27.053 [2024-07-24 20:02:30.643055] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:08:27.053 [2024-07-24 20:02:30.643161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949107 ] 00:08:27.053 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.053 [2024-07-24 20:02:30.725383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.312 [2024-07-24 20:02:30.868290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.571 Running I/O for 1 seconds... 00:08:28.504 00:08:28.504 Latency(us) 00:08:28.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.504 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:28.504 Verification LBA range: start 0x0 length 0x400 00:08:28.504 Nvme0n1 : 1.02 1214.93 75.93 0.00 0.00 51340.56 3422.44 45438.29 00:08:28.504 =================================================================================================================== 00:08:28.504 Total : 1214.93 75.93 0.00 0.00 51340.56 3422.44 45438.29 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.095 rmmod nvme_tcp 00:08:29.095 rmmod nvme_fabrics 00:08:29.095 rmmod nvme_keyring 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:29.095 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1948775 ']' 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1948775 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1948775 ']' 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1948775 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@955 -- # uname 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1948775 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1948775' 00:08:29.096 killing process with pid 1948775 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1948775 00:08:29.096 20:02:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1948775 00:08:29.363 [2024-07-24 20:02:33.027936] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:29.363 20:02:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.363 20:02:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.363 20:02:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.363 20:02:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.363 20:02:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.363 20:02:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.363 20:02:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.363 20:02:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:31.899 00:08:31.899 real 0m10.077s 00:08:31.899 user 0m23.074s 00:08:31.899 sys 0m3.403s 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.899 ************************************ 00:08:31.899 END TEST nvmf_host_management 00:08:31.899 ************************************ 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.899 ************************************ 00:08:31.899 START TEST nvmf_lvol 00:08:31.899 ************************************ 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:31.899 * Looking for test storage... 00:08:31.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.899 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.900 20:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:34.436 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.436 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:34.437 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:34.437 Found net devices under 0000:84:00.0: cvl_0_0 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:34.437 Found net devices under 0000:84:00.1: cvl_0_1 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.437 20:02:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:08:34.437 00:08:34.437 --- 10.0.0.2 ping statistics --- 00:08:34.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.437 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:08:34.437 00:08:34.437 --- 10.0.0.1 ping statistics --- 00:08:34.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.437 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1951331 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1951331 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1951331 ']' 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.437 20:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.437 [2024-07-24 20:02:37.999655] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:08:34.437 [2024-07-24 20:02:37.999761] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.437 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.437 [2024-07-24 20:02:38.099342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.696 [2024-07-24 20:02:38.301637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.696 [2024-07-24 20:02:38.301710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.696 [2024-07-24 20:02:38.301729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.696 [2024-07-24 20:02:38.301745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.696 [2024-07-24 20:02:38.301759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.696 [2024-07-24 20:02:38.301831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.696 [2024-07-24 20:02:38.301893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.696 [2024-07-24 20:02:38.301897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.696 20:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.696 20:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:34.696 20:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:34.696 20:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:34.696 20:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.697 20:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.697 20:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.263 [2024-07-24 20:02:38.982200] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.263 20:02:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.830 20:02:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:35.830 20:02:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.396 20:02:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:36.396 20:02:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:36.962 20:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:37.221 20:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9ba85fbf-0980-4c56-bc45-8ec30899c0b6 
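The lvstore UUID captured above seeds the rest of the lvol flow. Condensed into one place, the RPC sequence the following trace lines drive looks roughly like this (a sketch, not the test's literal code: UUIDs are whatever each call returns at runtime, sizes are MiB, and rpc.py is invoked here relative to the SPDK tree where the trace uses the absolute workspace path):

rpc=scripts/rpc.py
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)           # returns the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)          # 20M lvol, returns its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)      # freeze the 20M image
$rpc bdev_lvol_resize "$lvol" 30                         # grow the live lvol to 30M
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)           # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                          # fully allocate the clone, detaching it from the snapshot

The spdk_nvme_perf run started alongside this sequence (-t 10) is what keeps I/O in flight while the snapshot, resize, clone, and inflate happen underneath it.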
00:08:37.221 20:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9ba85fbf-0980-4c56-bc45-8ec30899c0b6 lvol 20 00:08:37.479 20:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ddcaadb0-3ac0-46b2-9053-13c5f6f9b547 00:08:37.479 20:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.044 20:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ddcaadb0-3ac0-46b2-9053-13c5f6f9b547 00:08:38.302 20:02:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:38.868 [2024-07-24 20:02:42.415664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.868 20:02:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.434 20:02:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1952019 00:08:39.434 20:02:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:39.434 20:02:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:39.434 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.369 20:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ddcaadb0-3ac0-46b2-9053-13c5f6f9b547 MY_SNAPSHOT 00:08:40.628 20:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1dd1ef51-0873-477c-bafd-d0d3b5f8b02a 00:08:40.628 20:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ddcaadb0-3ac0-46b2-9053-13c5f6f9b547 30 00:08:41.195 20:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1dd1ef51-0873-477c-bafd-d0d3b5f8b02a MY_CLONE 00:08:41.453 20:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ccee51d5-ee58-421c-aefa-d77cbf05eb2a 00:08:41.453 20:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ccee51d5-ee58-421c-aefa-d77cbf05eb2a 00:08:42.388 20:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1952019 00:08:50.546 Initializing NVMe Controllers 00:08:50.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:50.546 Controller IO queue size 128, less than required. 00:08:50.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:50.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:50.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:50.546 Initialization complete. Launching workers. 00:08:50.546 ======================================================== 00:08:50.546 Latency(us) 00:08:50.546 Device Information : IOPS MiB/s Average min max 00:08:50.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7886.40 30.81 16235.81 375.35 109300.57 00:08:50.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7813.20 30.52 16393.27 2817.81 99201.38 00:08:50.546 ======================================================== 00:08:50.546 Total : 15699.60 61.33 16314.18 375.35 109300.57 00:08:50.546 00:08:50.546 20:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.546 20:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ddcaadb0-3ac0-46b2-9053-13c5f6f9b547 00:08:50.546 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9ba85fbf-0980-4c56-bc45-8ec30899c0b6 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.112 rmmod nvme_tcp 00:08:51.112 rmmod nvme_fabrics 00:08:51.112 rmmod nvme_keyring 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1951331 ']' 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1951331 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1951331 ']' 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1951331 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1951331 00:08:51.112 20:02:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1951331' 00:08:51.112 killing process with pid 1951331 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1951331 00:08:51.112 20:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1951331 00:08:51.679 20:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.679 20:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.679 20:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.679 20:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.679 20:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.679 20:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.679 20:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.679 20:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.582 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.582 00:08:53.582 real 0m22.125s 00:08:53.582 user 1m14.379s 00:08:53.582 sys 0m6.651s 00:08:53.582 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.582 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.582 ************************************ 00:08:53.582 END TEST nvmf_lvol 00:08:53.582 ************************************ 00:08:53.582 20:02:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.582 20:02:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.582 20:02:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.582 20:02:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.582 ************************************ 00:08:53.582 START TEST nvmf_lvs_grow 00:08:53.582 ************************************ 00:08:53.582 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.841 * Looking for test storage... 
00:08:53.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.841 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.841 20:02:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:53.842 20:02:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.842 20:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.379 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:56.380 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:56.380 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:56.380 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.380 
20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:56.381 Found net devices under 0000:84:00.0: cvl_0_0 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:56.381 Found net devices under 0000:84:00.1: cvl_0_1 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.381 20:03:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.381 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:56.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:08:56.640 00:08:56.640 --- 10.0.0.2 ping statistics --- 00:08:56.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.640 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:08:56.640 00:08:56.640 --- 10.0.0.1 ping statistics --- 00:08:56.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.640 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.640 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1955402 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1955402 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1955402 ']' 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.641 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.641 [2024-07-24 20:03:00.353958] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
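
The nvmftestinit trace above wires the two ports of one physical e810 NIC back-to-back through a network namespace: the target port (cvl_0_0) is moved into cvl_0_0_ns_spdk while the initiator port (cvl_0_1) stays in the root namespace, so NVMe/TCP traffic actually crosses the wire. A minimal sketch of the same rig, assuming the renamed port names cvl_0_0/cvl_0_1 from this run:

  # create the target namespace and move one port into it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address the initiator side (root ns) and the target side (inside the ns)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring both ends, plus the namespace loopback, up
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round trips in the ping statistics above are the sanity check that both directions work before the target is started inside the namespace (the "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt" invocation that follows).
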
00:08:56.641 [2024-07-24 20:03:00.354060] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.900 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.900 [2024-07-24 20:03:00.475447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.900 [2024-07-24 20:03:00.626304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.900 [2024-07-24 20:03:00.626398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.900 [2024-07-24 20:03:00.626440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.900 [2024-07-24 20:03:00.626477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.900 [2024-07-24 20:03:00.626492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.900 [2024-07-24 20:03:00.626539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.160 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.160 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:57.160 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:57.160 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.160 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.160 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.160 20:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:57.729 [2024-07-24 20:03:01.357750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.729 ************************************ 00:08:57.729 START TEST lvs_grow_clean 00:08:57.729 ************************************ 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.729 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.315 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:58.315 20:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:58.882 20:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:08:58.882 20:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:08:58.882 20:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:59.141 20:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:59.141 20:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:59.141 20:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf lvol 150 00:08:59.710 20:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=674b42f4-18af-4a71-9c99-5240c4886dfe 00:08:59.710 20:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.710 20:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:00.276 [2024-07-24 20:03:03.908930] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:00.276 [2024-07-24 20:03:03.909108] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:00.276 true 00:09:00.276 20:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:00.276 20:03:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:00.844 20:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:00.844 20:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:01.102 20:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 674b42f4-18af-4a71-9c99-5240c4886dfe 00:09:01.669 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:01.928 [2024-07-24 20:03:05.602879] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.928 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1956133 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1956133 /var/tmp/bdevperf.sock 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1956133 ']' 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.495 20:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:02.495 [2024-07-24 20:03:06.048177] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
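
The lvs_grow_clean flow being exercised here reduces to a short RPC sequence: back a logical volume store with a file-based AIO bdev, grow the file, rescan, and extend the lvstore so the new clusters become available (49 data clusters before, 99 after, as the jq checks in the trace confirm; the bdev_lvol_grow_lvstore call itself lands a couple of seconds into the bdevperf run below). A condensed sketch, with $rpc standing for scripts/rpc.py and an illustrative backing-file path:

  truncate -s 200M /tmp/aio_bdev                 # 200 MiB backing file
  $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # prints the lvstore UUID
  $rpc bdev_lvol_create -u "$lvs" lvol 150       # 150 MiB lvol on the ~196 MiB store
  truncate -s 400M /tmp/aio_bdev                 # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                  # ...let the aio bdev pick it up...
  $rpc bdev_lvol_grow_lvstore -u "$lvs"          # ...and extend the lvstore
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

The cluster counts (49 rather than 50, 99 rather than 100) reflect metadata overhead on the 4 MiB-cluster store, which is exactly what the (( data_clusters == 49 )) and (( data_clusters == 99 )) assertions in the trace encode.
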
00:09:02.495 [2024-07-24 20:03:06.048290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956133 ] 00:09:02.495 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.495 [2024-07-24 20:03:06.132050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.495 [2024-07-24 20:03:06.270590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.754 20:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.754 20:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:02.754 20:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:03.319 Nvme0n1 00:09:03.319 20:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:03.578 [ 00:09:03.578 { 00:09:03.578 "name": "Nvme0n1", 00:09:03.578 "aliases": [ 00:09:03.578 "674b42f4-18af-4a71-9c99-5240c4886dfe" 00:09:03.578 ], 00:09:03.578 "product_name": "NVMe disk", 00:09:03.578 "block_size": 4096, 00:09:03.578 "num_blocks": 38912, 00:09:03.578 "uuid": "674b42f4-18af-4a71-9c99-5240c4886dfe", 00:09:03.578 "assigned_rate_limits": { 00:09:03.578 "rw_ios_per_sec": 0, 00:09:03.578 "rw_mbytes_per_sec": 0, 00:09:03.578 "r_mbytes_per_sec": 0, 00:09:03.578 "w_mbytes_per_sec": 0 00:09:03.578 }, 00:09:03.578 "claimed": false, 00:09:03.578 "zoned": false, 00:09:03.578 "supported_io_types": { 00:09:03.578 "read": true, 00:09:03.578 "write": true, 00:09:03.578 "unmap": true, 00:09:03.578 "flush": true, 00:09:03.578 "reset": true, 00:09:03.578 "nvme_admin": true, 00:09:03.578 "nvme_io": true, 00:09:03.578 "nvme_io_md": false, 00:09:03.578 "write_zeroes": true, 00:09:03.578 "zcopy": false, 00:09:03.578 "get_zone_info": false, 00:09:03.578 "zone_management": false, 00:09:03.578 "zone_append": false, 00:09:03.578 "compare": true, 00:09:03.578 "compare_and_write": true, 00:09:03.578 "abort": true, 00:09:03.578 "seek_hole": false, 00:09:03.578 "seek_data": false, 00:09:03.578 "copy": true, 00:09:03.578 "nvme_iov_md": false 00:09:03.578 }, 00:09:03.578 "memory_domains": [ 00:09:03.578 { 00:09:03.578 "dma_device_id": "system", 00:09:03.578 "dma_device_type": 1 00:09:03.578 } 00:09:03.578 ], 00:09:03.578 "driver_specific": { 00:09:03.578 "nvme": [ 00:09:03.578 { 00:09:03.578 "trid": { 00:09:03.578 "trtype": "TCP", 00:09:03.578 "adrfam": "IPv4", 00:09:03.578 "traddr": "10.0.0.2", 00:09:03.578 "trsvcid": "4420", 00:09:03.578 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:03.578 }, 00:09:03.578 "ctrlr_data": { 00:09:03.578 "cntlid": 1, 00:09:03.578 "vendor_id": "0x8086", 00:09:03.578 "model_number": "SPDK bdev Controller", 00:09:03.578 "serial_number": "SPDK0", 00:09:03.578 "firmware_revision": "24.09", 00:09:03.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.578 "oacs": { 00:09:03.578 "security": 0, 00:09:03.578 "format": 0, 00:09:03.578 "firmware": 0, 00:09:03.578 "ns_manage": 0 00:09:03.578 }, 00:09:03.578 
"multi_ctrlr": true, 00:09:03.578 "ana_reporting": false 00:09:03.578 }, 00:09:03.578 "vs": { 00:09:03.578 "nvme_version": "1.3" 00:09:03.578 }, 00:09:03.578 "ns_data": { 00:09:03.578 "id": 1, 00:09:03.578 "can_share": true 00:09:03.578 } 00:09:03.578 } 00:09:03.578 ], 00:09:03.578 "mp_policy": "active_passive" 00:09:03.578 } 00:09:03.578 } 00:09:03.578 ] 00:09:03.578 20:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1956381 00:09:03.578 20:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:03.578 20:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.836 Running I/O for 10 seconds... 00:09:05.250 Latency(us) 00:09:05.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.250 Nvme0n1 : 1.00 11319.00 44.21 0.00 0.00 0.00 0.00 0.00 00:09:05.250 =================================================================================================================== 00:09:05.250 Total : 11319.00 44.21 0.00 0.00 0.00 0.00 0.00 00:09:05.250 00:09:05.815 20:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:05.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.815 Nvme0n1 : 2.00 11517.50 44.99 0.00 0.00 0.00 0.00 0.00 00:09:05.815 =================================================================================================================== 00:09:05.815 Total : 11517.50 44.99 0.00 0.00 0.00 0.00 0.00 00:09:05.815 00:09:06.074 true 00:09:06.074 20:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:06.074 20:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:06.333 20:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:06.333 20:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:06.333 20:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1956381 00:09:06.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.899 Nvme0n1 : 3.00 11596.67 45.30 0.00 0.00 0.00 0.00 0.00 00:09:06.899 =================================================================================================================== 00:09:06.899 Total : 11596.67 45.30 0.00 0.00 0.00 0.00 0.00 00:09:06.899 00:09:07.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.834 Nvme0n1 : 4.00 11660.00 45.55 0.00 0.00 0.00 0.00 0.00 00:09:07.834 =================================================================================================================== 00:09:07.834 Total : 11660.00 45.55 0.00 0.00 0.00 0.00 0.00 00:09:07.834 00:09:09.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:09.211 Nvme0n1 : 5.00 11708.80 45.74 0.00 0.00 0.00 0.00 0.00 00:09:09.211 =================================================================================================================== 00:09:09.211 Total : 11708.80 45.74 0.00 0.00 0.00 0.00 0.00 00:09:09.211 00:09:10.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.155 Nvme0n1 : 6.00 11752.67 45.91 0.00 0.00 0.00 0.00 0.00 00:09:10.155 =================================================================================================================== 00:09:10.155 Total : 11752.67 45.91 0.00 0.00 0.00 0.00 0.00 00:09:10.155 00:09:11.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.088 Nvme0n1 : 7.00 11783.86 46.03 0.00 0.00 0.00 0.00 0.00 00:09:11.088 =================================================================================================================== 00:09:11.088 Total : 11783.86 46.03 0.00 0.00 0.00 0.00 0.00 00:09:11.088 00:09:12.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.024 Nvme0n1 : 8.00 11813.25 46.15 0.00 0.00 0.00 0.00 0.00 00:09:12.024 =================================================================================================================== 00:09:12.024 Total : 11813.25 46.15 0.00 0.00 0.00 0.00 0.00 00:09:12.024 00:09:12.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.959 Nvme0n1 : 9.00 11839.78 46.25 0.00 0.00 0.00 0.00 0.00 00:09:12.959 =================================================================================================================== 00:09:12.959 Total : 11839.78 46.25 0.00 0.00 0.00 0.00 0.00 00:09:12.959 00:09:13.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.894 Nvme0n1 : 10.00 11855.30 46.31 0.00 0.00 0.00 0.00 0.00 00:09:13.894 =================================================================================================================== 00:09:13.894 Total : 11855.30 46.31 0.00 0.00 0.00 0.00 0.00 00:09:13.894 00:09:13.894 00:09:13.894 Latency(us) 00:09:13.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.894 Nvme0n1 : 10.01 11853.60 46.30 0.00 0.00 10791.35 5971.06 21651.15 00:09:13.894 =================================================================================================================== 00:09:13.894 Total : 11853.60 46.30 0.00 0.00 10791.35 5971.06 21651.15 00:09:13.894 0 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1956133 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1956133 ']' 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1956133 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1956133 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:13.894 
20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1956133' 00:09:13.894 killing process with pid 1956133 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1956133 00:09:13.894 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.894 00:09:13.894 Latency(us) 00:09:13.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.894 =================================================================================================================== 00:09:13.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.894 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1956133 00:09:14.461 20:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.033 20:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.599 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:15.599 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:15.856 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:15.857 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:15.857 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.423 [2024-07-24 20:03:19.918289] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:16.423 20:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:16.998 request: 00:09:16.998 { 00:09:16.998 "uuid": "a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf", 00:09:16.998 "method": "bdev_lvol_get_lvstores", 00:09:16.998 "req_id": 1 00:09:16.998 } 00:09:16.998 Got JSON-RPC error response 00:09:16.998 response: 00:09:16.998 { 00:09:16.998 "code": -19, 00:09:16.998 "message": "No such device" 00:09:16.998 } 00:09:16.998 20:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:16.998 20:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:16.998 20:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:16.998 20:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:16.998 20:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.256 aio_bdev 00:09:17.256 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 674b42f4-18af-4a71-9c99-5240c4886dfe 00:09:17.256 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=674b42f4-18af-4a71-9c99-5240c4886dfe 00:09:17.256 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.256 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:17.256 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.256 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.256 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:17.823 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 674b42f4-18af-4a71-9c99-5240c4886dfe -t 2000 00:09:18.389 [ 00:09:18.389 { 00:09:18.389 "name": "674b42f4-18af-4a71-9c99-5240c4886dfe", 00:09:18.389 "aliases": [ 00:09:18.389 "lvs/lvol" 00:09:18.389 ], 00:09:18.389 "product_name": "Logical Volume", 00:09:18.389 "block_size": 4096, 00:09:18.389 "num_blocks": 38912, 00:09:18.389 "uuid": "674b42f4-18af-4a71-9c99-5240c4886dfe", 00:09:18.389 "assigned_rate_limits": { 00:09:18.389 "rw_ios_per_sec": 0, 00:09:18.389 "rw_mbytes_per_sec": 0, 00:09:18.389 "r_mbytes_per_sec": 0, 00:09:18.389 "w_mbytes_per_sec": 0 00:09:18.389 }, 00:09:18.389 "claimed": false, 00:09:18.389 "zoned": false, 00:09:18.389 "supported_io_types": { 00:09:18.389 "read": true, 00:09:18.389 "write": true, 00:09:18.389 "unmap": true, 00:09:18.389 "flush": false, 00:09:18.389 "reset": true, 00:09:18.389 "nvme_admin": false, 00:09:18.389 "nvme_io": false, 00:09:18.389 "nvme_io_md": false, 00:09:18.389 "write_zeroes": true, 00:09:18.389 "zcopy": false, 00:09:18.389 "get_zone_info": false, 00:09:18.389 "zone_management": false, 00:09:18.389 "zone_append": false, 00:09:18.389 "compare": false, 00:09:18.389 "compare_and_write": false, 00:09:18.389 "abort": false, 00:09:18.389 "seek_hole": true, 00:09:18.389 "seek_data": true, 00:09:18.389 "copy": false, 00:09:18.389 "nvme_iov_md": false 00:09:18.389 }, 00:09:18.389 "driver_specific": { 00:09:18.389 "lvol": { 00:09:18.389 "lvol_store_uuid": "a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf", 00:09:18.389 "base_bdev": "aio_bdev", 00:09:18.389 "thin_provision": false, 00:09:18.389 "num_allocated_clusters": 38, 00:09:18.389 "snapshot": false, 00:09:18.389 "clone": false, 00:09:18.389 "esnap_clone": false 00:09:18.389 } 00:09:18.389 } 00:09:18.389 } 00:09:18.389 ] 00:09:18.389 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:18.389 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:18.389 20:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.956 20:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.956 20:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:18.956 20:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:19.215 20:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:19.215 20:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 674b42f4-18af-4a71-9c99-5240c4886dfe 00:09:19.826 20:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a895bfba-ec5b-4ef7-b2d1-d0a9e66102bf 00:09:20.402 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.969 00:09:20.969 real 0m23.273s 00:09:20.969 user 0m22.592s 00:09:20.969 sys 0m2.640s 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:20.969 ************************************ 00:09:20.969 END TEST lvs_grow_clean 00:09:20.969 ************************************ 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.969 ************************************ 00:09:20.969 START TEST lvs_grow_dirty 00:09:20.969 ************************************ 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.969 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:21.228 20:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:21.486 20:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:21.486 20:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:22.055 20:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:22.055 20:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:22.055 20:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:22.621 20:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:22.621 20:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:22.621 20:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 lvol 150 00:09:22.880 20:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cb52ab14-c16c-4b7f-8210-e160838d3861 00:09:22.880 20:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:22.880 20:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:23.447 [2024-07-24 20:03:27.035575] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:23.447 [2024-07-24 20:03:27.035710] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:23.447 true 00:09:23.447 20:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:23.447 20:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:23.705 20:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:23.705 20:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:24.273 20:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb52ab14-c16c-4b7f-8210-e160838d3861 00:09:24.840 20:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:25.098 [2024-07-24 20:03:28.866187] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.357 20:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
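
As in the clean variant, the test then publishes the lvol over the fabric; the export is a handful of RPCs against the running target (the transport itself was created once at nvmfappstart time, and the namespace UUID cb52ab14-... below is the one reported by bdev_lvol_create above):

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb52ab14-c16c-4b7f-8210-e160838d3861
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Again $rpc is shorthand for scripts/rpc.py; the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notices in the trace are the listener RPCs taking effect.
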
00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1959459 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1959459 /var/tmp/bdevperf.sock 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1959459 ']' 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:25.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.921 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:25.921 [2024-07-24 20:03:29.520607] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
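The entries here launch bdevperf as a second SPDK application: -z keeps it idle until told otherwise over its private RPC socket, which is why the script waits on /var/tmp/bdevperf.sock before proceeding. A rough sketch of that pattern, with the flags taken verbatim from the trace and everything else (including $SPDK_DIR) illustrative:

    # Start bdevperf paused; -z means "wait for RPC" rather than run at once.
    $SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!
    # Attach the NVMe/TCP target as bdev Nvme0n1, then kick off the workload.
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    kill "$bdevperf_pid"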
00:09:25.921 [2024-07-24 20:03:29.520709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959459 ] 00:09:25.921 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.921 [2024-07-24 20:03:29.623958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.179 [2024-07-24 20:03:29.766171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.179 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.179 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:26.179 20:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:26.745 Nvme0n1 00:09:26.745 20:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:27.311 [ 00:09:27.311 { 00:09:27.311 "name": "Nvme0n1", 00:09:27.311 "aliases": [ 00:09:27.311 "cb52ab14-c16c-4b7f-8210-e160838d3861" 00:09:27.311 ], 00:09:27.311 "product_name": "NVMe disk", 00:09:27.311 "block_size": 4096, 00:09:27.311 "num_blocks": 38912, 00:09:27.311 "uuid": "cb52ab14-c16c-4b7f-8210-e160838d3861", 00:09:27.311 "assigned_rate_limits": { 00:09:27.311 "rw_ios_per_sec": 0, 00:09:27.311 "rw_mbytes_per_sec": 0, 00:09:27.311 "r_mbytes_per_sec": 0, 00:09:27.311 "w_mbytes_per_sec": 0 00:09:27.311 }, 00:09:27.311 "claimed": false, 00:09:27.311 "zoned": false, 00:09:27.311 "supported_io_types": { 00:09:27.311 "read": true, 00:09:27.311 "write": true, 00:09:27.311 "unmap": true, 00:09:27.311 "flush": true, 00:09:27.311 "reset": true, 00:09:27.311 "nvme_admin": true, 00:09:27.311 "nvme_io": true, 00:09:27.311 "nvme_io_md": false, 00:09:27.311 "write_zeroes": true, 00:09:27.311 "zcopy": false, 00:09:27.311 "get_zone_info": false, 00:09:27.311 "zone_management": false, 00:09:27.311 "zone_append": false, 00:09:27.311 "compare": true, 00:09:27.311 "compare_and_write": true, 00:09:27.311 "abort": true, 00:09:27.311 "seek_hole": false, 00:09:27.311 "seek_data": false, 00:09:27.311 "copy": true, 00:09:27.311 "nvme_iov_md": false 00:09:27.311 }, 00:09:27.311 "memory_domains": [ 00:09:27.311 { 00:09:27.311 "dma_device_id": "system", 00:09:27.311 "dma_device_type": 1 00:09:27.311 } 00:09:27.311 ], 00:09:27.311 "driver_specific": { 00:09:27.311 "nvme": [ 00:09:27.311 { 00:09:27.311 "trid": { 00:09:27.311 "trtype": "TCP", 00:09:27.311 "adrfam": "IPv4", 00:09:27.311 "traddr": "10.0.0.2", 00:09:27.311 "trsvcid": "4420", 00:09:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:27.311 }, 00:09:27.311 "ctrlr_data": { 00:09:27.311 "cntlid": 1, 00:09:27.311 "vendor_id": "0x8086", 00:09:27.311 "model_number": "SPDK bdev Controller", 00:09:27.311 "serial_number": "SPDK0", 00:09:27.311 "firmware_revision": "24.09", 00:09:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:27.311 "oacs": { 00:09:27.311 "security": 0, 00:09:27.311 "format": 0, 00:09:27.311 "firmware": 0, 00:09:27.311 "ns_manage": 0 00:09:27.311 }, 00:09:27.311 
"multi_ctrlr": true, 00:09:27.311 "ana_reporting": false 00:09:27.311 }, 00:09:27.311 "vs": { 00:09:27.311 "nvme_version": "1.3" 00:09:27.311 }, 00:09:27.311 "ns_data": { 00:09:27.311 "id": 1, 00:09:27.311 "can_share": true 00:09:27.311 } 00:09:27.311 } 00:09:27.311 ], 00:09:27.311 "mp_policy": "active_passive" 00:09:27.311 } 00:09:27.311 } 00:09:27.311 ] 00:09:27.311 20:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1959597 00:09:27.311 20:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:27.311 20:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:27.569 Running I/O for 10 seconds... 00:09:28.503 Latency(us) 00:09:28.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.503 Nvme0n1 : 1.00 11431.00 44.65 0.00 0.00 0.00 0.00 0.00 00:09:28.503 =================================================================================================================== 00:09:28.503 Total : 11431.00 44.65 0.00 0.00 0.00 0.00 0.00 00:09:28.503 00:09:29.437 20:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:29.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.437 Nvme0n1 : 2.00 11526.50 45.03 0.00 0.00 0.00 0.00 0.00 00:09:29.437 =================================================================================================================== 00:09:29.437 Total : 11526.50 45.03 0.00 0.00 0.00 0.00 0.00 00:09:29.437 00:09:29.695 true 00:09:29.695 20:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:29.695 20:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:30.261 20:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:30.261 20:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:30.261 20:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1959597 00:09:30.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.519 Nvme0n1 : 3.00 11602.67 45.32 0.00 0.00 0.00 0.00 0.00 00:09:30.519 =================================================================================================================== 00:09:30.519 Total : 11602.67 45.32 0.00 0.00 0.00 0.00 0.00 00:09:30.519 00:09:31.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.452 Nvme0n1 : 4.00 11687.25 45.65 0.00 0.00 0.00 0.00 0.00 00:09:31.452 =================================================================================================================== 00:09:31.452 Total : 11687.25 45.65 0.00 0.00 0.00 0.00 0.00 00:09:31.452 00:09:32.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:32.387 Nvme0n1 : 5.00 11725.40 45.80 0.00 0.00 0.00 0.00 0.00 00:09:32.387 =================================================================================================================== 00:09:32.387 Total : 11725.40 45.80 0.00 0.00 0.00 0.00 0.00 00:09:32.387 00:09:33.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.784 Nvme0n1 : 6.00 11771.33 45.98 0.00 0.00 0.00 0.00 0.00 00:09:33.784 =================================================================================================================== 00:09:33.784 Total : 11771.33 45.98 0.00 0.00 0.00 0.00 0.00 00:09:33.784 00:09:34.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.367 Nvme0n1 : 7.00 11804.43 46.11 0.00 0.00 0.00 0.00 0.00 00:09:34.367 =================================================================================================================== 00:09:34.367 Total : 11804.43 46.11 0.00 0.00 0.00 0.00 0.00 00:09:34.367 00:09:35.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.743 Nvme0n1 : 8.00 11829.25 46.21 0.00 0.00 0.00 0.00 0.00 00:09:35.743 =================================================================================================================== 00:09:35.743 Total : 11829.25 46.21 0.00 0.00 0.00 0.00 0.00 00:09:35.743 00:09:36.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.678 Nvme0n1 : 9.00 11856.11 46.31 0.00 0.00 0.00 0.00 0.00 00:09:36.678 =================================================================================================================== 00:09:36.678 Total : 11856.11 46.31 0.00 0.00 0.00 0.00 0.00 00:09:36.678 00:09:37.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.614 Nvme0n1 : 10.00 11878.50 46.40 0.00 0.00 0.00 0.00 0.00 00:09:37.614 =================================================================================================================== 00:09:37.614 Total : 11878.50 46.40 0.00 0.00 0.00 0.00 0.00 00:09:37.614 00:09:37.614 00:09:37.614 Latency(us) 00:09:37.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.614 Nvme0n1 : 10.01 11880.86 46.41 0.00 0.00 10767.70 6359.42 21651.15 00:09:37.614 =================================================================================================================== 00:09:37.614 Total : 11880.86 46.41 0.00 0.00 10767.70 6359.42 21651.15 00:09:37.614 0 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1959459 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1959459 ']' 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1959459 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1959459 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:37.614 
20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1959459' 00:09:37.614 killing process with pid 1959459 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1959459 00:09:37.614 Received shutdown signal, test time was about 10.000000 seconds 00:09:37.614 00:09:37.614 Latency(us) 00:09:37.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.614 =================================================================================================================== 00:09:37.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:37.614 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1959459 00:09:37.872 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:38.131 20:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:38.699 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:38.699 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1955402 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1955402 00:09:38.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1955402 Killed "${NVMF_APP[@]}" "$@" 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1960941 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 1960941 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1960941 ']' 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.959 20:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.959 [2024-07-24 20:03:42.740932] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:09:38.959 [2024-07-24 20:03:42.741040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.225 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.225 [2024-07-24 20:03:42.856975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.484 [2024-07-24 20:03:43.063157] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.484 [2024-07-24 20:03:43.063222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.484 [2024-07-24 20:03:43.063243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.484 [2024-07-24 20:03:43.063261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.484 [2024-07-24 20:03:43.063275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
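This is what makes the test "dirty": the first nvmf_tgt was killed with SIGKILL while the lvstore was still open, so the replacement target started above has to recover it from disk. Re-creating the AIO bdev over the same, already-grown file triggers the blobstore recovery visible in the notices that follow. A sketch of that step, reusing the hypothetical names from the earlier sketch:

    # Re-create the AIO bdev over the grown file; examine replays the blobstore
    # metadata and the lvol bdev reappears under its old UUID.
    $SPDK_DIR/scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    $SPDK_DIR/scripts/rpc.py bdev_wait_for_examine
    $SPDK_DIR/scripts/rpc.py bdev_get_bdevs -b "$lvol_uuid" -t 2000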
00:09:39.484 [2024-07-24 20:03:43.063310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.484 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.484 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:39.484 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.484 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.484 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.484 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.484 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.430 [2024-07-24 20:03:43.870018] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:40.430 [2024-07-24 20:03:43.870337] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:40.430 [2024-07-24 20:03:43.870486] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:40.430 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:40.430 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cb52ab14-c16c-4b7f-8210-e160838d3861 00:09:40.430 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=cb52ab14-c16c-4b7f-8210-e160838d3861 00:09:40.430 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.430 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:40.430 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.430 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.430 20:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:40.430 20:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cb52ab14-c16c-4b7f-8210-e160838d3861 -t 2000 00:09:40.997 [ 00:09:40.997 { 00:09:40.997 "name": "cb52ab14-c16c-4b7f-8210-e160838d3861", 00:09:40.997 "aliases": [ 00:09:40.997 "lvs/lvol" 00:09:40.997 ], 00:09:40.997 "product_name": "Logical Volume", 00:09:40.997 "block_size": 4096, 00:09:40.997 "num_blocks": 38912, 00:09:40.997 "uuid": "cb52ab14-c16c-4b7f-8210-e160838d3861", 00:09:40.997 "assigned_rate_limits": { 00:09:40.997 "rw_ios_per_sec": 0, 00:09:40.997 "rw_mbytes_per_sec": 0, 00:09:40.997 "r_mbytes_per_sec": 0, 00:09:40.997 "w_mbytes_per_sec": 0 00:09:40.997 }, 00:09:40.997 "claimed": false, 00:09:40.997 "zoned": false, 
00:09:40.997 "supported_io_types": { 00:09:40.997 "read": true, 00:09:40.997 "write": true, 00:09:40.997 "unmap": true, 00:09:40.997 "flush": false, 00:09:40.997 "reset": true, 00:09:40.997 "nvme_admin": false, 00:09:40.997 "nvme_io": false, 00:09:40.997 "nvme_io_md": false, 00:09:40.997 "write_zeroes": true, 00:09:40.997 "zcopy": false, 00:09:40.997 "get_zone_info": false, 00:09:40.997 "zone_management": false, 00:09:40.997 "zone_append": false, 00:09:40.997 "compare": false, 00:09:40.997 "compare_and_write": false, 00:09:40.997 "abort": false, 00:09:40.997 "seek_hole": true, 00:09:40.997 "seek_data": true, 00:09:40.997 "copy": false, 00:09:40.997 "nvme_iov_md": false 00:09:40.997 }, 00:09:40.997 "driver_specific": { 00:09:40.997 "lvol": { 00:09:40.997 "lvol_store_uuid": "523f4b71-3991-4ac3-b2c7-b9fba9bb07d5", 00:09:40.997 "base_bdev": "aio_bdev", 00:09:40.997 "thin_provision": false, 00:09:40.997 "num_allocated_clusters": 38, 00:09:40.997 "snapshot": false, 00:09:40.997 "clone": false, 00:09:40.997 "esnap_clone": false 00:09:40.997 } 00:09:40.997 } 00:09:40.997 } 00:09:40.997 ] 00:09:40.997 20:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:40.997 20:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:40.997 20:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:41.564 20:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:41.564 20:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:41.564 20:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:42.131 20:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:42.131 20:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:42.389 [2024-07-24 20:03:46.165450] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:42.648 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:42.906 request: 00:09:42.906 { 00:09:42.906 "uuid": "523f4b71-3991-4ac3-b2c7-b9fba9bb07d5", 00:09:42.906 "method": "bdev_lvol_get_lvstores", 00:09:42.906 "req_id": 1 00:09:42.906 } 00:09:42.906 Got JSON-RPC error response 00:09:42.906 response: 00:09:42.906 { 00:09:42.906 "code": -19, 00:09:42.906 "message": "No such device" 00:09:42.906 } 00:09:42.906 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:42.906 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:42.906 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:42.906 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:42.906 20:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.473 aio_bdev 00:09:43.474 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cb52ab14-c16c-4b7f-8210-e160838d3861 00:09:43.474 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=cb52ab14-c16c-4b7f-8210-e160838d3861 00:09:43.474 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.474 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:43.474 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.474 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.474 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.732 20:03:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cb52ab14-c16c-4b7f-8210-e160838d3861 -t 2000 00:09:44.299 [ 00:09:44.299 { 00:09:44.299 "name": "cb52ab14-c16c-4b7f-8210-e160838d3861", 00:09:44.299 "aliases": [ 00:09:44.299 "lvs/lvol" 00:09:44.299 ], 00:09:44.299 "product_name": "Logical Volume", 00:09:44.299 "block_size": 4096, 00:09:44.299 "num_blocks": 38912, 00:09:44.299 "uuid": "cb52ab14-c16c-4b7f-8210-e160838d3861", 00:09:44.299 "assigned_rate_limits": { 00:09:44.299 "rw_ios_per_sec": 0, 00:09:44.299 "rw_mbytes_per_sec": 0, 00:09:44.299 "r_mbytes_per_sec": 0, 00:09:44.299 "w_mbytes_per_sec": 0 00:09:44.299 }, 00:09:44.299 "claimed": false, 00:09:44.299 "zoned": false, 00:09:44.299 "supported_io_types": { 00:09:44.299 "read": true, 00:09:44.299 "write": true, 00:09:44.299 "unmap": true, 00:09:44.299 "flush": false, 00:09:44.299 "reset": true, 00:09:44.299 "nvme_admin": false, 00:09:44.299 "nvme_io": false, 00:09:44.299 "nvme_io_md": false, 00:09:44.299 "write_zeroes": true, 00:09:44.299 "zcopy": false, 00:09:44.299 "get_zone_info": false, 00:09:44.299 "zone_management": false, 00:09:44.299 "zone_append": false, 00:09:44.299 "compare": false, 00:09:44.299 "compare_and_write": false, 00:09:44.299 "abort": false, 00:09:44.299 "seek_hole": true, 00:09:44.299 "seek_data": true, 00:09:44.299 "copy": false, 00:09:44.299 "nvme_iov_md": false 00:09:44.299 }, 00:09:44.299 "driver_specific": { 00:09:44.299 "lvol": { 00:09:44.299 "lvol_store_uuid": "523f4b71-3991-4ac3-b2c7-b9fba9bb07d5", 00:09:44.299 "base_bdev": "aio_bdev", 00:09:44.299 "thin_provision": false, 00:09:44.299 "num_allocated_clusters": 38, 00:09:44.299 "snapshot": false, 00:09:44.299 "clone": false, 00:09:44.299 "esnap_clone": false 00:09:44.299 } 00:09:44.299 } 00:09:44.299 } 00:09:44.299 ] 00:09:44.299 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:44.299 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:44.299 20:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.867 20:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.867 20:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 00:09:44.867 20:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:45.126 20:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:45.126 20:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb52ab14-c16c-4b7f-8210-e160838d3861 00:09:45.694 20:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 523f4b71-3991-4ac3-b2c7-b9fba9bb07d5 
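The cluster arithmetic the trace verifies here is worth spelling out: at the 4M cluster size the grown 400M file yields 99 data clusters (the blobstore reserves the rest for metadata), the 150M lvol pins 38 of them (num_allocated_clusters in the dump above), and 99 - 38 = 61 are free. A sketch of that check plus the teardown, under the same assumptions as the earlier sketches:

    lvs_json=$($SPDK_DIR/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid")
    [ "$(echo "$lvs_json" | jq -r '.[0].total_data_clusters')" -eq 99 ] || exit 1
    [ "$(echo "$lvs_json" | jq -r '.[0].free_clusters')" -eq 61 ] || exit 1
    # Teardown in dependency order: lvol, then lvstore, then the AIO bdev.
    $SPDK_DIR/scripts/rpc.py bdev_lvol_delete "$lvol_uuid"
    $SPDK_DIR/scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs_uuid"
    $SPDK_DIR/scripts/rpc.py bdev_aio_delete aio_bdev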
00:09:46.260 20:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:46.518 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.777 00:09:46.777 real 0m25.557s 00:09:46.777 user 1m3.541s 00:09:46.777 sys 0m6.052s 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.777 ************************************ 00:09:46.777 END TEST lvs_grow_dirty 00:09:46.777 ************************************ 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:46.777 nvmf_trace.0 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.777 rmmod nvme_tcp 00:09:46.777 rmmod nvme_fabrics 00:09:46.777 rmmod nvme_keyring 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1960941 ']' 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1960941 00:09:46.777 
20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1960941 ']' 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1960941 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1960941 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1960941' 00:09:46.777 killing process with pid 1960941 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1960941 00:09:46.777 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1960941 00:09:47.343 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.343 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.343 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.343 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.343 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.343 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.343 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.343 20:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.333 20:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.333 00:09:49.333 real 0m55.576s 00:09:49.333 user 1m35.769s 00:09:49.333 sys 0m11.349s 00:09:49.333 20:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.333 20:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:49.333 ************************************ 00:09:49.333 END TEST nvmf_lvs_grow 00:09:49.333 ************************************ 00:09:49.333 20:03:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:49.333 20:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:49.333 20:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.333 20:03:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.333 ************************************ 00:09:49.333 START TEST nvmf_bdev_io_wait 00:09:49.333 ************************************ 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:49.333 * Looking for test storage... 00:09:49.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.333 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.593 
20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.593 20:03:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.884 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.884 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.884 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:52.885 20:03:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:52.885 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:52.885 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:52.885 Found net devices under 0000:84:00.0: cvl_0_0 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:52.885 Found net devices under 0000:84:00.1: cvl_0_1 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.885 20:03:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.885 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.886 20:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:09:52.886 00:09:52.886 --- 10.0.0.2 ping statistics --- 00:09:52.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.886 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:09:52.886 00:09:52.886 --- 10.0.0.1 ping statistics --- 00:09:52.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.886 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1964003 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1964003 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1964003 ']' 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.886 20:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.886 [2024-07-24 20:03:56.250261] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
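[Annotation] The nvmf_tcp_init sequence traced above reduces to a two-endpoint topology on one ice port pair: the target port is moved into a private network namespace and addressed as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction verifies the path. A minimal sketch of the same setup, assuming the cvl_0_0/cvl_0_1 device names from this run:

    # Target port goes into its own namespace; initiator stays in the root namespace.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target ns -> initiator

Every subsequent target-side command is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD array), which is why nvmf_tgt below listens from inside the namespace.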
00:09:52.886 [2024-07-24 20:03:56.250361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.886 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.886 [2024-07-24 20:03:56.360904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.886 [2024-07-24 20:03:56.576426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.886 [2024-07-24 20:03:56.576551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.886 [2024-07-24 20:03:56.576572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.886 [2024-07-24 20:03:56.576588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.886 [2024-07-24 20:03:56.576602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.886 [2024-07-24 20:03:56.576685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.886 [2024-07-24 20:03:56.576750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.886 [2024-07-24 20:03:56.576807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.886 [2024-07-24 20:03:56.576811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.823 20:03:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.823 [2024-07-24 20:03:57.448109] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.823 Malloc0 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.823 [2024-07-24 20:03:57.528334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1964164 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1964166 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@32 -- # FLUSH_PID=1964168 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.823 { 00:09:53.823 "params": { 00:09:53.823 "name": "Nvme$subsystem", 00:09:53.823 "trtype": "$TEST_TRANSPORT", 00:09:53.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.823 "adrfam": "ipv4", 00:09:53.823 "trsvcid": "$NVMF_PORT", 00:09:53.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.823 "hdgst": ${hdgst:-false}, 00:09:53.823 "ddgst": ${ddgst:-false} 00:09:53.823 }, 00:09:53.823 "method": "bdev_nvme_attach_controller" 00:09:53.823 } 00:09:53.823 EOF 00:09:53.823 )") 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.823 { 00:09:53.823 "params": { 00:09:53.823 "name": "Nvme$subsystem", 00:09:53.823 "trtype": "$TEST_TRANSPORT", 00:09:53.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.823 "adrfam": "ipv4", 00:09:53.823 "trsvcid": "$NVMF_PORT", 00:09:53.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.823 "hdgst": ${hdgst:-false}, 00:09:53.823 "ddgst": ${ddgst:-false} 00:09:53.823 }, 00:09:53.823 "method": "bdev_nvme_attach_controller" 00:09:53.823 } 00:09:53.823 EOF 00:09:53.823 )") 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1964170 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.823 { 00:09:53.823 "params": { 00:09:53.823 "name": "Nvme$subsystem", 00:09:53.823 "trtype": "$TEST_TRANSPORT", 00:09:53.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.823 "adrfam": "ipv4", 00:09:53.823 "trsvcid": "$NVMF_PORT", 00:09:53.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.823 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.823 "hdgst": ${hdgst:-false}, 00:09:53.823 "ddgst": ${ddgst:-false} 00:09:53.823 }, 00:09:53.823 "method": "bdev_nvme_attach_controller" 00:09:53.823 } 00:09:53.823 EOF 00:09:53.823 )") 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.823 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.823 { 00:09:53.823 "params": { 00:09:53.823 "name": "Nvme$subsystem", 00:09:53.823 "trtype": "$TEST_TRANSPORT", 00:09:53.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.823 "adrfam": "ipv4", 00:09:53.824 "trsvcid": "$NVMF_PORT", 00:09:53.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.824 "hdgst": ${hdgst:-false}, 00:09:53.824 "ddgst": ${ddgst:-false} 00:09:53.824 }, 00:09:53.824 "method": "bdev_nvme_attach_controller" 00:09:53.824 } 00:09:53.824 EOF 00:09:53.824 )") 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1964164 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.824 "params": { 00:09:53.824 "name": "Nvme1", 00:09:53.824 "trtype": "tcp", 00:09:53.824 "traddr": "10.0.0.2", 00:09:53.824 "adrfam": "ipv4", 00:09:53.824 "trsvcid": "4420", 00:09:53.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.824 "hdgst": false, 00:09:53.824 "ddgst": false 00:09:53.824 }, 00:09:53.824 "method": "bdev_nvme_attach_controller" 00:09:53.824 }' 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.824 "params": { 00:09:53.824 "name": "Nvme1", 00:09:53.824 "trtype": "tcp", 00:09:53.824 "traddr": "10.0.0.2", 00:09:53.824 "adrfam": "ipv4", 00:09:53.824 "trsvcid": "4420", 00:09:53.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.824 "hdgst": false, 00:09:53.824 "ddgst": false 00:09:53.824 }, 00:09:53.824 "method": "bdev_nvme_attach_controller" 00:09:53.824 }' 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.824 "params": { 00:09:53.824 "name": "Nvme1", 00:09:53.824 "trtype": "tcp", 00:09:53.824 "traddr": "10.0.0.2", 00:09:53.824 "adrfam": "ipv4", 00:09:53.824 "trsvcid": "4420", 00:09:53.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.824 "hdgst": false, 00:09:53.824 "ddgst": false 00:09:53.824 }, 00:09:53.824 "method": "bdev_nvme_attach_controller" 00:09:53.824 }' 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:53.824 20:03:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.824 "params": { 00:09:53.824 "name": "Nvme1", 00:09:53.824 "trtype": "tcp", 00:09:53.824 "traddr": "10.0.0.2", 00:09:53.824 "adrfam": "ipv4", 00:09:53.824 "trsvcid": "4420", 00:09:53.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.824 "hdgst": false, 00:09:53.824 "ddgst": false 00:09:53.824 }, 00:09:53.824 "method": "bdev_nvme_attach_controller" 00:09:53.824 }'
00:09:53.824 [2024-07-24 20:03:57.580384] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:09:53.824 [2024-07-24 20:03:57.580386] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:09:53.824 [2024-07-24 20:03:57.580384] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:09:53.824 [2024-07-24 20:03:57.580496] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:09:53.824 [2024-07-24 20:03:57.580496] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:09:53.824 [2024-07-24 20:03:57.580497] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:53.824 [2024-07-24 20:03:57.593851] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
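[Annotation] Each of the four bdevperf instances receives its bdev configuration as JSON on --json /dev/fd/63, i.e. via process substitution from gen_nvmf_target_json; the resolved bdev_nvme_attach_controller parameters are the identical blocks printed above. A sketch of an equivalent standalone invocation with the config written to a regular file (the surrounding "subsystems" wrapper is reconstructed from nvmf/common.sh conventions and may differ in detail; /tmp/nvme1.json is an illustrative path):

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same flags as the WRITE_PID instance above: core 0x10, queue depth 128, 4 KiB writes for 1 s.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The four instances differ only in core mask (-m), instance id (-i, which selects the spdk1..spdk4 hugepage file prefixes visible in the EAL parameter lines) and workload (-w write/read/flush/unmap).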
00:09:53.824 [2024-07-24 20:03:57.593939] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:54.083 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.083 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.083 [2024-07-24 20:03:57.761830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.083 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.341 [2024-07-24 20:03:57.878625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.341 [2024-07-24 20:03:57.882511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:54.341 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.341 [2024-07-24 20:03:58.006590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:54.341 [2024-07-24 20:03:58.021915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.341 [2024-07-24 20:03:58.109965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.600 [2024-07-24 20:03:58.145677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:54.600 [2024-07-24 20:03:58.227399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:54.600 Running I/O for 1 seconds... 00:09:54.600 Running I/O for 1 seconds... 00:09:54.857 Running I/O for 1 seconds... 00:09:54.857 Running I/O for 1 seconds... 00:09:55.792 00:09:55.792 Latency(us) 00:09:55.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.792 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:55.792 Nvme1n1 : 1.01 9021.39 35.24 0.00 0.00 14122.86 6602.15 22136.60 00:09:55.792 =================================================================================================================== 00:09:55.792 Total : 9021.39 35.24 0.00 0.00 14122.86 6602.15 22136.60 00:09:55.792 00:09:55.792 Latency(us) 00:09:55.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.792 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:55.792 Nvme1n1 : 1.02 3888.51 15.19 0.00 0.00 32368.14 7718.68 44855.75 00:09:55.792 =================================================================================================================== 00:09:55.792 Total : 3888.51 15.19 0.00 0.00 32368.14 7718.68 44855.75 00:09:55.792 00:09:55.792 Latency(us) 00:09:55.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.792 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:55.792 Nvme1n1 : 1.00 150931.33 589.58 0.00 0.00 844.43 350.44 1177.22 00:09:55.792 =================================================================================================================== 00:09:55.792 Total : 150931.33 589.58 0.00 0.00 844.43 350.44 1177.22 00:09:55.792 00:09:55.792 Latency(us) 00:09:55.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.792 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:55.792 Nvme1n1 : 1.01 3874.88 15.14 0.00 0.00 32837.62 11262.48 64079.64 00:09:55.792 =================================================================================================================== 00:09:55.792 Total : 3874.88 15.14 0.00 0.00 32837.62 11262.48 64079.64 00:09:56.088 20:03:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1964166 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1964168 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1964170 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.347 rmmod nvme_tcp 00:09:56.347 rmmod nvme_fabrics 00:09:56.347 rmmod nvme_keyring 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1964003 ']' 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1964003 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1964003 ']' 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1964003 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.347 20:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1964003 00:09:56.347 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.347 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.347 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1964003' 00:09:56.347 killing process with pid 1964003 00:09:56.347 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1964003 00:09:56.347 20:04:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1964003 00:09:56.914 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.914 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.914 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.914 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.914 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.914 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.914 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.914 20:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.813 00:09:58.813 real 0m9.428s 00:09:58.813 user 0m22.003s 00:09:58.813 sys 0m4.466s 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:58.813 ************************************ 00:09:58.813 END TEST nvmf_bdev_io_wait 00:09:58.813 ************************************ 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.813 ************************************ 00:09:58.813 START TEST nvmf_queue_depth 00:09:58.813 ************************************ 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:58.813 * Looking for test storage... 
00:09:58.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.813 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.814 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.072 20:04:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.072 20:04:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.358 20:04:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.358 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:02.359 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:02.359 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:02.359 20:04:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:02.359 Found net devices under 0000:84:00.0: cvl_0_0 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:02.359 Found net devices under 0000:84:00.1: cvl_0_1 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:02.359 
20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:10:02.359 00:10:02.359 --- 10.0.0.2 ping statistics --- 00:10:02.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.359 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:10:02.359 00:10:02.359 --- 10.0.0.1 ping statistics --- 00:10:02.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.359 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1966543 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1966543 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1966543 ']' 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.359 20:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.359 [2024-07-24 20:04:05.700574] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
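[Annotation] nvmfappstart -m 0x2 above launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. A sketch of that mechanism, assuming the paths from this run (the polling loop is illustrative; the real helper in autotest_common.sh retries up to max_retries times):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready to serve RPCs.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

Because /var/tmp/spdk.sock is a filesystem UNIX socket, rpc.py can reach the target from the root namespace even though the process itself runs inside cvl_0_0_ns_spdk.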
00:10:02.359 [2024-07-24 20:04:05.700673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.359 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.359 [2024-07-24 20:04:05.793691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.359 [2024-07-24 20:04:05.932451] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.360 [2024-07-24 20:04:05.932523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.360 [2024-07-24 20:04:05.932542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.360 [2024-07-24 20:04:05.932559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.360 [2024-07-24 20:04:05.932573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.360 [2024-07-24 20:04:05.932620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.360 [2024-07-24 20:04:06.109926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.360 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.619 Malloc0 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.619 [2024-07-24 20:04:06.185875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1966564 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1966564 /var/tmp/bdevperf.sock 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1966564 ']' 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:02.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.619 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:02.619 [2024-07-24 20:04:06.279873] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
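Collected from the rpc_cmd traces above, target provisioning amounts to five RPCs; the equivalent direct rpc.py calls are sketched below with the names, sizes and flags exactly as traced (comments are annotations, not harness output):

    rpc="$SPDK/scripts/rpc.py"   # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then drives that listener with queue depth 1024 and 4 KiB verify I/O for 10 seconds (-q 1024 -o 4096 -w verify -t 10). At that block size, MiB/s = IOPS * 4096 / 2^20, so the ~6772 IOPS reported below works out to the ~26.45 MiB/s shown in the results table.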
00:10:02.619 [2024-07-24 20:04:06.280020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966564 ] 00:10:02.619 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.619 [2024-07-24 20:04:06.373400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.878 [2024-07-24 20:04:06.518967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.878 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.878 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:02.878 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:02.878 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.878 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.136 NVMe0n1 00:10:03.136 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.136 20:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:03.136 Running I/O for 10 seconds... 00:10:15.341 00:10:15.341 Latency(us) 00:10:15.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.341 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:15.341 Verification LBA range: start 0x0 length 0x4000 00:10:15.341 NVMe0n1 : 10.12 6772.01 26.45 0.00 0.00 150429.18 30486.38 90876.59 00:10:15.341 =================================================================================================================== 00:10:15.341 Total : 6772.01 26.45 0.00 0.00 150429.18 30486.38 90876.59 00:10:15.341 0 00:10:15.341 20:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1966564 00:10:15.341 20:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1966564 ']' 00:10:15.341 20:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1966564 00:10:15.341 20:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1966564 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1966564' 00:10:15.341 killing process with pid 1966564 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1966564 00:10:15.341 Received shutdown 
signal, test time was about 10.000000 seconds 00:10:15.341 00:10:15.341 Latency(us) 00:10:15.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.341 =================================================================================================================== 00:10:15.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1966564 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:15.341 rmmod nvme_tcp 00:10:15.341 rmmod nvme_fabrics 00:10:15.341 rmmod nvme_keyring 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1966543 ']' 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1966543 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1966543 ']' 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1966543 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1966543 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1966543' 00:10:15.341 killing process with pid 1966543 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1966543 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1966543 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:15.341 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:15.342 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:15.342 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:15.342 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:15.342 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.342 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.342 20:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:16.278 00:10:16.278 real 0m17.353s 00:10:16.278 user 0m23.199s 00:10:16.278 sys 0m4.103s 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.278 ************************************ 00:10:16.278 END TEST nvmf_queue_depth 00:10:16.278 ************************************ 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.278 ************************************ 00:10:16.278 START TEST nvmf_target_multipath 00:10:16.278 ************************************ 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:16.278 * Looking for test storage... 
00:10:16.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.278 20:04:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:16.279 20:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
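The e810/x722/mlx arrays being filled in here key NIC discovery off PCI vendor:device IDs; 0x8086:0x159b, matched twice below as 0000:84:00.0 and 0000:84:00.1 under the ice driver checks, is an Intel E810 port. A hypothetical standalone version of the same lookup, using the sysfs path the script itself globs:

    # List Intel E810 functions (8086:159b) and the net device behind each one
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done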
00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.565 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:19.566 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:19.566 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:19.566 Found net devices under 0000:84:00.0: cvl_0_0 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.566 20:04:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:19.566 Found net devices under 0000:84:00.1: cvl_0_1 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:19.566 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:10:19.566 00:10:19.566 --- 10.0.0.2 ping statistics --- 00:10:19.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.566 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:10:19.566 00:10:19.566 --- 10.0.0.1 ping statistics --- 00:10:19.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.566 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:19.566 only one NIC for nvmf test 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.566 rmmod nvme_tcp 00:10:19.566 rmmod nvme_fabrics 00:10:19.566 rmmod nvme_keyring 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.566 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.567 20:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:21.469 00:10:21.469 real 0m5.146s 
00:10:21.469 user 0m0.923s 00:10:21.469 sys 0m2.220s 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:21.469 ************************************ 00:10:21.469 END TEST nvmf_target_multipath 00:10:21.469 ************************************ 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.469 ************************************ 00:10:21.469 START TEST nvmf_zcopy 00:10:21.469 ************************************ 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:21.469 * Looking for test storage... 00:10:21.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:21.469 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.470 20:04:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.470 20:04:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:21.470 20:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:24.757 20:04:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.757 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:24.758 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:24.758 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:24.758 Found net devices under 0000:84:00.0: cvl_0_0 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:24.758 Found net devices under 0000:84:00.1: cvl_0_1 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.758 20:04:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:24.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:10:24.758 00:10:24.758 --- 10.0.0.2 ping statistics --- 00:10:24.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.758 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:24.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:10:24.758 00:10:24.758 --- 10.0.0.1 ping statistics --- 00:10:24.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.758 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1971914 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1971914 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1971914 ']' 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.758 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.758 [2024-07-24 20:04:28.420097] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
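Note: nvmf_tcp_init (common.sh@229-268 above) wires the two E810 ports back-to-back through a network namespace: cvl_0_0 becomes the target port at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the default namespace as the initiator port at 10.0.0.1, and a single ping in each direction (0.255 ms / 0.141 ms above) proves the path before any NVMe-oF traffic. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                            # default ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

nvmfappstart then launches nvmf_tgt inside the namespace (common.sh@480 above), which is why the target invocation carries the ip netns exec cvl_0_0_ns_spdk prefix.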
00:10:24.759 [2024-07-24 20:04:28.420274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.759 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.017 [2024-07-24 20:04:28.544343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.017 [2024-07-24 20:04:28.686306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.017 [2024-07-24 20:04:28.686383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.017 [2024-07-24 20:04:28.686404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.017 [2024-07-24 20:04:28.686421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.017 [2024-07-24 20:04:28.686448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.017 [2024-07-24 20:04:28.686488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.276 [2024-07-24 20:04:28.880522] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.276 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.277 [2024-07-24 20:04:28.896789] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.277 malloc0 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:25.277 { 00:10:25.277 "params": { 00:10:25.277 "name": "Nvme$subsystem", 00:10:25.277 "trtype": "$TEST_TRANSPORT", 00:10:25.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:25.277 "adrfam": "ipv4", 00:10:25.277 "trsvcid": "$NVMF_PORT", 00:10:25.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:25.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:25.277 "hdgst": ${hdgst:-false}, 00:10:25.277 "ddgst": ${ddgst:-false} 00:10:25.277 }, 00:10:25.277 "method": "bdev_nvme_attach_controller" 00:10:25.277 } 00:10:25.277 EOF 00:10:25.277 )") 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
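Note: at this point zcopy.sh has fully provisioned the target: a TCP transport created with --zcopy (the feature under test), subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a 32 MiB malloc bdev exported as namespace 1. The same sequence as standalone scripts/rpc.py calls (a sketch; the rpc_cmd helper above drives these same RPCs over /var/tmp/spdk.sock):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy          # transport with zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                         # allow any host, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB RAM bdev, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1

gen_nvmf_target_json, being traced here, assembles the initiator-side bdev configuration (one bdev_nvme_attach_controller entry per subsystem) that bdevperf reads over an anonymous file descriptor; the resulting JSON is printed next.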
00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:10:25.277 20:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:10:25.277 "params": {
00:10:25.277 "name": "Nvme1",
00:10:25.277 "trtype": "tcp",
00:10:25.277 "traddr": "10.0.0.2",
00:10:25.277 "adrfam": "ipv4",
00:10:25.277 "trsvcid": "4420",
00:10:25.277 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:25.277 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:25.277 "hdgst": false,
00:10:25.277 "ddgst": false
00:10:25.277 },
00:10:25.277 "method": "bdev_nvme_attach_controller"
00:10:25.277 }'
00:10:25.277 [2024-07-24 20:04:29.000983] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:10:25.277 [2024-07-24 20:04:29.001084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972055 ]
00:10:25.277 EAL: No free 2048 kB hugepages reported on node 1
00:10:25.536 [2024-07-24 20:04:29.086406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:25.536 [2024-07-24 20:04:29.232267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:26.103 Running I/O for 10 seconds...
00:10:36.074
00:10:36.074                                                            Latency(us)
00:10:36.074 Device Information          : runtime(s)      IOPS       MiB/s     Fail/s    TO/s      Average        min        max
00:10:36.074 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:36.074 Verification LBA range: start 0x0 length 0x1000
00:10:36.074 Nvme1n1                      :      10.02    4556.75      35.60      0.00      0.00     28005.95     940.56   38641.97
00:10:36.074 ===================================================================================================================
00:10:36.074 Total                        :              4556.75      35.60      0.00      0.00     28005.95     940.56   38641.97
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1973282
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:36.333 {
00:10:36.333 "params": {
00:10:36.333 "name": "Nvme$subsystem",
00:10:36.333 "trtype": "$TEST_TRANSPORT",
00:10:36.333 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:36.333 "adrfam": "ipv4",
00:10:36.333 "trsvcid": "$NVMF_PORT",
00:10:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:36.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:36.333 "hdgst": ${hdgst:-false},
00:10:36.333 "ddgst": ${ddgst:-false}
00:10:36.333 },
00:10:36.333 "method": "bdev_nvme_attach_controller"
00:10:36.333 }
00:10:36.333 EOF
00:10:36.333 )")
00:10:36.333 [2024-07-24
20:04:39.976655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:39.976715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:36.333 20:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:36.333 "params": { 00:10:36.333 "name": "Nvme1", 00:10:36.333 "trtype": "tcp", 00:10:36.333 "traddr": "10.0.0.2", 00:10:36.333 "adrfam": "ipv4", 00:10:36.333 "trsvcid": "4420", 00:10:36.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:36.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:36.333 "hdgst": false, 00:10:36.333 "ddgst": false 00:10:36.333 }, 00:10:36.333 "method": "bdev_nvme_attach_controller" 00:10:36.333 }' 00:10:36.333 [2024-07-24 20:04:39.984597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:39.984630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:39.992616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:39.992647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.000638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.000669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.008724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.008783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.016691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.016727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.021907] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
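Note: the first bdevperf pass (-t 10 -q 128 -w verify -o 8192) completed cleanly: 4556.75 IOPS at 8 KiB is exactly the reported 35.60 MiB/s (4556.75 * 8192 / 2^20 = 35.6), with zero Fail/s and TO/s. zcopy.sh@37-39 now backgrounds a second bdevperf (perfpid=1973282) running a 5-second 50/50 random read/write mix (-w randrw -M 50) against the same namespace. A minimal standalone form of that invocation; the "subsystems" envelope is the standard SPDK JSON-config shape, assumed here to match what gen_nvmf_target_json emits, and the params are copied from the config printed above:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -t 5 -q 128 -w randrw -M 50 -o 8192 --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    )

The bash process substitution is what surfaces as --json /dev/fd/63 in the trace: the generated config never touches disk.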
00:10:36.333 [2024-07-24 20:04:40.021999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973282 ] 00:10:36.333 [2024-07-24 20:04:40.024710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.024743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.032730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.032761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.040752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.040784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.048774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.048804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.333 [2024-07-24 20:04:40.056795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.056825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.064819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.064850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.072843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.072873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.080865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.080895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.088887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.088917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.096909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.096938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.098717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.333 [2024-07-24 20:04:40.104948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.104983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.333 [2024-07-24 20:04:40.113000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.333 [2024-07-24 20:04:40.113050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.594 [2024-07-24 20:04:40.120992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.594 [2024-07-24 
20:04:40.121025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.594 [2024-07-24 20:04:40.129013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.594 [2024-07-24 20:04:40.129043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.594 [2024-07-24 20:04:40.137031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.594 [2024-07-24 20:04:40.137060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.594 [2024-07-24 20:04:40.145056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.594 [2024-07-24 20:04:40.145087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.594 [2024-07-24 20:04:40.153078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.594 [2024-07-24 20:04:40.153108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.594 [2024-07-24 20:04:40.161101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.594 [2024-07-24 20:04:40.161131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.594 [2024-07-24 20:04:40.169124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.169154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.177168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.177206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.189233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.189282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.197201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.197230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.205223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.205252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.213245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.213289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.221269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.221299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.229291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.229321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.237320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.237352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.243123] reactor.c: 
941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.595 [2024-07-24 20:04:40.245340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.245371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.253363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.253394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.261415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.261477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.269445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.269494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.277479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.277518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.285493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.285533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.293514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.293553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.301529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.301565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.309548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.309588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.317563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.317601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.325567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.325597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.333607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.333645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.341637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.341674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.349657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.349695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.357660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.357690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.365685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.365715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.595 [2024-07-24 20:04:40.373721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.595 [2024-07-24 20:04:40.373756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.381741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.381776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.389759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.389791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.397784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.397815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.405805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.405836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.413826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.413855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.421855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.421884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.429879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.429909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.437902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.437932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.445934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.445968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.453956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.453990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.461979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.462013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.469994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.470024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
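Note: the repeated ERROR pairs that fill the remainder of this excerpt are the test behaving as intended, not a failure. While the second bdevperf instance starts and runs, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already occupies; each attempt briefly pauses the subsystem (the nvmf_rpc_ns_paused frames in these lines), fails in spdk_nvmf_subsystem_add_ns_ext, and resumes it, exercising the pause/resume path that the zero-copy transport must survive under active I/O. The driving pattern is essentially the following (a sketch of the loop, reusing $RPC and the perfpid captured at zcopy.sh@39):

    while kill -0 "$perfpid" 2>/dev/null; do      # as long as bdevperf is still running
        # NSID 1 is taken, so every attempt is expected to fail:
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done

Each iteration yields one "Requested NSID 1 already in use" line from subsystem.c and one "Unable to add namespace" line from nvmf_rpc.c, which is the alternating cadence visible in the timestamps below.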
00:10:36.873 [2024-07-24 20:04:40.478027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.478063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 Running I/O for 5 seconds... 00:10:36.873 [2024-07-24 20:04:40.486042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.486072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.497869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.497909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.510025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.510062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.524530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.524576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.538198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.538236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.552583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.552621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.566462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.566500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.581239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.581276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.595101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.595142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.609684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.609722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.623670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.623707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.873 [2024-07-24 20:04:40.637667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.873 [2024-07-24 20:04:40.637704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.148 [2024-07-24 20:04:40.651474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.148 [2024-07-24 20:04:40.651510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.148 [2024-07-24 20:04:40.666243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:10:37.148 [2024-07-24 20:04:40.666287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.148 [2024-07-24 20:04:40.679625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.148 [2024-07-24 20:04:40.679663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.148 [2024-07-24 20:04:40.693584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.148 [2024-07-24 20:04:40.693621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.148 [2024-07-24 20:04:40.707955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.148 [2024-07-24 20:04:40.707992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.722493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.722537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.737081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.737125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.750847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.750884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.765473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.765510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.779525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.779562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.793906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.793953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.808635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.808672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.822656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.822693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.836464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.836501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.850319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.850355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.864365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.864402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 
[2024-07-24 20:04:40.878614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.878651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.892941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.892978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.906847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.906884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.149 [2024-07-24 20:04:40.920521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.149 [2024-07-24 20:04:40.920559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:40.934577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:40.934614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:40.948475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:40.948512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:40.962686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:40.962723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:40.976967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:40.977004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:40.991539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:40.991576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.005338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:41.005376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.019610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:41.019647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.033672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:41.033709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.048250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:41.048287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.062743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:41.062792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.077085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 
20:04:41.077124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.091156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:41.091205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.105214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:41.105257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.119544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.407 [2024-07-24 20:04:41.119584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.407 [2024-07-24 20:04:41.133403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.408 [2024-07-24 20:04:41.133453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.408 [2024-07-24 20:04:41.147477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.408 [2024-07-24 20:04:41.147515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.408 [2024-07-24 20:04:41.161622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.408 [2024-07-24 20:04:41.161661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.408 [2024-07-24 20:04:41.175531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.408 [2024-07-24 20:04:41.175569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.408 [2024-07-24 20:04:41.189743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.408 [2024-07-24 20:04:41.189784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.204617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.204655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.218194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.218233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.231922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.231960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.245485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.245523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.259848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.259891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.274331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.274369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.288610] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.288648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.302866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.302904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.316624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.316662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.330722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.330759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.344825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.344864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.358919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.358957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.373047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.373084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.387576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.387614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.401772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.401810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.415876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.415914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.429964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.430002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.666 [2024-07-24 20:04:41.444217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.666 [2024-07-24 20:04:41.444264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.925 [2024-07-24 20:04:41.458922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.925 [2024-07-24 20:04:41.458960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.925 [2024-07-24 20:04:41.474125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.925 [2024-07-24 20:04:41.474163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.925 [2024-07-24 20:04:41.488126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.925 [2024-07-24 20:04:41.488164] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.925 [2024-07-24 20:04:41.502199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.925 [2024-07-24 20:04:41.502237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.925 - 00:10:41.826 [2024-07-24 20:04:41.516297 - 20:04:45.549843] (the identical *ERROR* pair above repeats for every further add-namespace attempt in this interval; per-attempt entries elided)
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.409757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.423973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.424011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.438225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.438262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.452336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.452373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.466398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.466444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.481057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.481093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.495508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.495544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.508642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.508678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.549809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.549843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 00:10:41.826 Latency(us) 00:10:41.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.826 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:41.826 Nvme1n1 : 5.05 8863.05 69.24 0.00 0.00 14303.67 6310.87 54370.61 00:10:41.826 =================================================================================================================== 00:10:41.826 Total : 8863.05 69.24 0.00 0.00 14303.67 6310.87 54370.61 00:10:41.826 [2024-07-24 20:04:45.554714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.554746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.562739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.562774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.570751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.570783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.578781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.578816] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.826 [2024-07-24 20:04:45.586845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.826 [2024-07-24 20:04:45.586897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.827 [2024-07-24 20:04:45.594872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.827 [2024-07-24 20:04:45.594924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.827 [2024-07-24 20:04:45.602899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.827 [2024-07-24 20:04:45.602951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.827 [2024-07-24 20:04:45.610910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.827 [2024-07-24 20:04:45.610959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.618937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.618987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.626969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.627023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.634987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.635041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.643003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.643051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.651030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.651084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.659055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.659109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.667078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.667129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.675108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.675162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.683127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.683182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.691145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.691198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.699164] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.699213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.707177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.707224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.715214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.715268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.723193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.723224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.731205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.731235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.739226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.739256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.747249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.747280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.755272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.755302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.085 [2024-07-24 20:04:45.763303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.085 [2024-07-24 20:04:45.763337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.771387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.771448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.779374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.779418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.787378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.787417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.795386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.795416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.803409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.803452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.819486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.819520] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.827494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.827524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.835570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.835618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.843582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.843628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.851588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.851636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.859574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.859604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.086 [2024-07-24 20:04:45.867593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.086 [2024-07-24 20:04:45.867623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.344 [2024-07-24 20:04:45.875617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.344 [2024-07-24 20:04:45.875647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1973282) - No such process 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1973282 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.344 delay0 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.344 20:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:42.344 EAL: No free 2048 kB hugepages reported on node 1
00:10:42.344 [2024-07-24 20:04:46.074657] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:50.486 Initializing NVMe Controllers
00:10:50.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:50.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:50.487 Initialization complete. Launching workers.
00:10:50.487 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 259, failed: 12499
00:10:50.487 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12664, failed to submit 94
00:10:50.487 success 12562, unsuccess 102, failed 0
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:50.487 rmmod nvme_tcp
00:10:50.487 rmmod nvme_fabrics
00:10:50.487 rmmod nvme_keyring
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1971914 ']'
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1971914
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1971914 ']'
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1971914
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1971914
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1971914'
00:10:50.487 killing process with pid 1971914
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1971914
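[ annotation, not part of the captured log: the abort statistics above come from SPDK's bundled abort example, with the delay bdev created just before keeping I/O queued long enough for aborts to land. Stripped of the autotest rpc_cmd/xtrace wrappers, the sequence reduces to the sketch below. It assumes a built SPDK tree (run from the repo root), a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the default /var/tmp/spdk.sock RPC socket; every command and flag is taken from this run. ]

# wrap malloc0 in a delay bdev: all four latency knobs set to 1000000 us (~1 s),
# so submitted I/O stays in flight long enough to be aborted
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# drive 50/50 random read/write at queue depth 64 for 5 s on core 0 and submit
# aborts for outstanding commands; prints the success/unsuccess totals seen above
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

[ end annotation ]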
00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1971914 00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.487 20:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:52.390 00:10:52.390 real 0m30.621s 00:10:52.390 user 0m42.961s 00:10:52.390 sys 0m11.032s 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.390 ************************************ 00:10:52.390 END TEST nvmf_zcopy 00:10:52.390 ************************************ 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.390 ************************************ 00:10:52.390 START TEST nvmf_nmic 00:10:52.390 ************************************ 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:52.390 * Looking for test storage... 
00:10:52.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain directories repeated several more times, omitted ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... repeats omitted as above ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeats omitted as above ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeats omitted as above ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:52.390 20:04:55
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:52.390 20:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.920 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:54.921 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:54.921 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.921 20:04:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:54.921 Found net devices under 0000:84:00.0: cvl_0_0 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:54.921 Found net devices under 0000:84:00.1: cvl_0_1 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:54.921 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:55.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:55.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms
00:10:55.183 
00:10:55.183 --- 10.0.0.2 ping statistics ---
00:10:55.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:55.183 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:55.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:55.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms
00:10:55.183 
00:10:55.183 --- 10.0.0.1 ping statistics ---
00:10:55.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:55.183 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1976910
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1976910
00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic --
common/autotest_common.sh@831 -- # '[' -z 1976910 ']' 00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.183 20:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.183 [2024-07-24 20:04:58.877242] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:10:55.183 [2024-07-24 20:04:58.877347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.183 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.442 [2024-07-24 20:04:58.991397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.442 [2024-07-24 20:04:59.196356] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.442 [2024-07-24 20:04:59.196475] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.442 [2024-07-24 20:04:59.196521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.442 [2024-07-24 20:04:59.196538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.442 [2024-07-24 20:04:59.196560] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
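[ annotation, not part of the captured log: the startup notices above come from nvmf_tgt running inside the test's network namespace. Condensed from the nvmf/common.sh commands visible in this log, the setup is roughly the sketch below; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are this rig's conventions, and the binary path assumes the SPDK repo root. ]

# the target-side port moves into its own namespace; the initiator side stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# launch the target inside the namespace: shm id 0, all tracepoint groups, 4-core mask
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

[ end annotation ]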
00:10:55.442 [2024-07-24 20:04:59.196652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.442 [2024-07-24 20:04:59.196716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.442 [2024-07-24 20:04:59.196772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.442 [2024-07-24 20:04:59.196776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.378 [2024-07-24 20:04:59.956947] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.378 Malloc0 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.378 20:04:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.378 [2024-07-24 20:05:00.015253] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
test case1: single bdev can't be used in multiple subsystems
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.378 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:56.378 [2024-07-24 20:05:00.039025] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:10:56.378 [2024-07-24 20:05:00.039086] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:10:56.378 [2024-07-24 20:05:00.039108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:56.378 request:
00:10:56.378 {
00:10:56.378 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:56.378 "namespace": {
00:10:56.378 "bdev_name": "Malloc0",
00:10:56.378 "no_auto_visible": false
00:10:56.378 },
00:10:56.378 "method": "nvmf_subsystem_add_ns",
00:10:56.378 "req_id": 1
00:10:56.378 }
00:10:56.378 Got JSON-RPC error response
00:10:56.378 response:
00:10:56.378 {
00:10:56.378 "code": -32602,
00:10:56.378 "message": "Invalid parameters"
00:10:56.378 }
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
Adding namespace failed - expected result.
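[ annotation, not part of the captured log: for readers reproducing test case1 by hand, the sequence above, stripped of the rpc_cmd/xtrace wrappers, is the following rpc.py calls. It assumes a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket; every method name and argument below appears verbatim in this log. ]

# transport, one malloc bdev, and a first subsystem that claims the bdev
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# a second subsystem cannot claim the same bdev: Malloc0 is already held with an
# exclusive_write claim by cnode1, so this add_ns fails with the JSON-RPC -32602
# "Invalid parameters" response shown above -- the expected result
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'failed as expected'

[ end annotation ]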
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
test case2: host connect to nvmf target in multiple paths
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:56.379 [2024-07-24 20:05:00.051291] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.379 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:56.945 20:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:10:57.889 20:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:10:57.889 20:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:10:57.889 20:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:10:57.889 20:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:10:57.889 20:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:10:59.789 20:05:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:10:59.789 20:05:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:10:59.789 20:05:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:10:59.789 20:05:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:10:59.789 20:05:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:10:59.789 20:05:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:10:59.789 20:05:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:59.789 [global]
00:10:59.789 thread=1
00:10:59.789 invalidate=1
00:10:59.789 rw=write
00:10:59.789 time_based=1
00:10:59.789 runtime=1
00:10:59.789 ioengine=libaio
00:10:59.789 direct=1
00:10:59.789 bs=4096
00:10:59.789 iodepth=1
00:10:59.789 norandommap=0
00:10:59.789 numjobs=1
00:10:59.789 
00:10:59.789 verify_dump=1
00:10:59.789 verify_backlog=512
00:10:59.789 verify_state_save=0
00:10:59.789 do_verify=1
00:10:59.789 verify=crc32c-intel
00:10:59.789 [job0]
00:10:59.789 filename=/dev/nvme0n1
00:11:00.046 Could not set queue depth (nvme0n1)
00:11:00.046 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:00.046 fio-3.35
00:11:00.046 Starting 1 thread
00:11:01.421 
00:11:01.421 job0: (groupid=0, jobs=1): err= 0: pid=1977558: Wed Jul 24 20:05:04 2024
00:11:01.421   read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec)
00:11:01.421     slat (nsec): min=14891, max=39602, avg=20717.14, stdev=7542.20
00:11:01.421     clat (usec): min=40706, max=41201, avg=40962.22, stdev=100.65
00:11:01.421      lat (usec): min=40724, max=41217, avg=40982.94, stdev=99.73
00:11:01.421     clat percentiles (usec):
00:11:01.421      | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:11:01.421      | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:11:01.421      | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:11:01.421      | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:01.421      | 99.99th=[41157]
00:11:01.421   write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets
00:11:01.421     slat (nsec): min=7935, max=35330, avg=13859.36, stdev=4544.06
00:11:01.421     clat (usec): min=199, max=387, avg=257.33, stdev=35.86
00:11:01.421      lat (usec): min=211, max=423, avg=271.19, stdev=37.38
00:11:01.421     clat percentiles (usec):
00:11:01.421      | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 225],
00:11:01.421      | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 265],
00:11:01.421      | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318],
00:11:01.421      | 99.00th=[ 338], 99.50th=[ 343], 99.90th=[ 388], 99.95th=[ 388],
00:11:01.421      | 99.99th=[ 388]
00:11:01.421    bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:11:01.421    iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:01.421   lat (usec)   : 250=47.28%, 500=48.78%
00:11:01.421   lat (msec)   : 50=3.94%
00:11:01.421   cpu          : usr=0.50%, sys=0.80%, ctx=533, majf=0, minf=2
00:11:01.421   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:01.421      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:01.421      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:01.421      issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:01.421      latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:01.421 
00:11:01.421 Run status group 0 (all jobs):
00:11:01.421    READ: bw=83.8KiB/s (85.8kB/s), 83.8KiB/s-83.8KiB/s (85.8kB/s-85.8kB/s), io=84.0KiB (86.0kB), run=1002-1002msec
00:11:01.421   WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB (2097kB), run=1002-1002msec
00:11:01.421 
00:11:01.421 Disk stats (read/write):
00:11:01.421   nvme0n1: ios=68/512, merge=0/0, ticks=783/132, in_queue=915, util=92.08%
00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:01.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:01.421 20:05:04
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.421 20:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:01.421 rmmod nvme_tcp 00:11:01.421 rmmod nvme_fabrics 00:11:01.421 rmmod nvme_keyring 00:11:01.421 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.421 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:01.421 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1976910 ']' 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1976910 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1976910 ']' 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1976910 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1976910 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1976910' 00:11:01.422 killing process with pid 1976910 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1976910 00:11:01.422 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1976910 00:11:01.989 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:01.989 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:01.989 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:01.989 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:01.989 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:01.989 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.989 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.989 20:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:03.892 00:11:03.892 real 0m11.713s 00:11:03.892 user 0m26.243s 00:11:03.892 sys 0m3.166s 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.892 ************************************ 00:11:03.892 END TEST nvmf_nmic 00:11:03.892 ************************************ 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.892 ************************************ 00:11:03.892 START TEST nvmf_fio_target 00:11:03.892 ************************************ 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:03.892 * Looking for test storage... 00:11:03.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.892 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.151 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:04.152 20:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:06.688 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.688 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:06.689 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:06.689 Found net devices under 0000:84:00.0: cvl_0_0 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:06.689 Found net devices under 0000:84:00.1: cvl_0_1 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.689 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.948 20:05:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:11:06.948 00:11:06.948 --- 10.0.0.2 ping statistics --- 00:11:06.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.948 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:11:06.948 00:11:06.948 --- 10.0.0.1 ping statistics --- 00:11:06.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.948 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1979777 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1979777 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1979777 ']' 00:11:06.948 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.949 20:05:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.949 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.949 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.949 20:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.949 [2024-07-24 20:05:10.705589] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:11:06.949 [2024-07-24 20:05:10.705727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.208 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.208 [2024-07-24 20:05:10.822162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.208 [2024-07-24 20:05:10.992400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.208 [2024-07-24 20:05:10.992498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.208 [2024-07-24 20:05:10.992519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.208 [2024-07-24 20:05:10.992535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.208 [2024-07-24 20:05:10.992548] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:07.208 [2024-07-24 20:05:10.992620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.208 [2024-07-24 20:05:10.992677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.208 [2024-07-24 20:05:10.992733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.208 [2024-07-24 20:05:10.992737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.466 20:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.466 20:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:07.466 20:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.466 20:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.466 20:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.466 20:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.466 20:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:07.724 [2024-07-24 20:05:11.508751] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.983 20:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.549 20:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:08.549 20:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.807 20:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:08.807 20:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.066 20:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:09.066 20:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.632 20:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:09.632 20:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:10.198 20:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.456 20:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:10.456 20:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.023 20:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:11.023 20:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.281 20:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:11.281 20:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:11.539 20:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:11.797 20:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:11.797 20:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:12.363 20:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:12.363 20:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.929 20:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.204 [2024-07-24 20:05:16.756879] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.204 20:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:13.500 20:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:13.758 20:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.324 20:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:14.324 20:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:14.324 20:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.324 20:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:14.324 20:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:14.324 20:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:16.855 20:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:16.855 20:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:16.855 20:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.855 20:05:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:16.855 20:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.855 20:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:16.855 20:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:16.855 [global] 00:11:16.855 thread=1 00:11:16.855 invalidate=1 00:11:16.855 rw=write 00:11:16.855 time_based=1 00:11:16.855 runtime=1 00:11:16.855 ioengine=libaio 00:11:16.855 direct=1 00:11:16.855 bs=4096 00:11:16.855 iodepth=1 00:11:16.855 norandommap=0 00:11:16.855 numjobs=1 00:11:16.855 00:11:16.855 verify_dump=1 00:11:16.855 verify_backlog=512 00:11:16.855 verify_state_save=0 00:11:16.855 do_verify=1 00:11:16.855 verify=crc32c-intel 00:11:16.855 [job0] 00:11:16.855 filename=/dev/nvme0n1 00:11:16.855 [job1] 00:11:16.855 filename=/dev/nvme0n2 00:11:16.855 [job2] 00:11:16.855 filename=/dev/nvme0n3 00:11:16.855 [job3] 00:11:16.855 filename=/dev/nvme0n4 00:11:16.855 Could not set queue depth (nvme0n1) 00:11:16.855 Could not set queue depth (nvme0n2) 00:11:16.855 Could not set queue depth (nvme0n3) 00:11:16.855 Could not set queue depth (nvme0n4) 00:11:16.855 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.855 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.855 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.855 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.855 fio-3.35 00:11:16.855 Starting 4 threads 00:11:18.230 00:11:18.230 job0: (groupid=0, jobs=1): err= 0: pid=1980993: Wed Jul 24 20:05:21 2024 00:11:18.230 read: IOPS=1180, BW=4724KiB/s (4837kB/s)(4856KiB/1028msec) 00:11:18.230 slat (nsec): min=6622, max=31542, avg=9585.94, stdev=3155.65 00:11:18.230 clat (usec): min=303, max=41124, avg=460.10, stdev=1647.73 00:11:18.230 lat (usec): min=310, max=41134, avg=469.69, stdev=1647.93 00:11:18.230 clat percentiles (usec): 00:11:18.230 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 347], 00:11:18.230 | 30.00th=[ 355], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 392], 00:11:18.230 | 70.00th=[ 400], 80.00th=[ 424], 90.00th=[ 490], 95.00th=[ 519], 00:11:18.230 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[40633], 99.95th=[41157], 00:11:18.230 | 99.99th=[41157] 00:11:18.230 write: IOPS=1494, BW=5977KiB/s (6120kB/s)(6144KiB/1028msec); 0 zone resets 00:11:18.230 slat (nsec): min=7647, max=38557, avg=13240.12, stdev=4602.46 00:11:18.230 clat (usec): min=191, max=4596, avg=278.33, stdev=147.58 00:11:18.230 lat (usec): min=203, max=4608, avg=291.57, stdev=147.86 00:11:18.230 clat percentiles (usec): 00:11:18.230 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:11:18.230 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 277], 00:11:18.230 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 359], 00:11:18.230 | 99.00th=[ 416], 99.50th=[ 457], 99.90th=[ 3163], 99.95th=[ 4621], 00:11:18.230 | 99.99th=[ 4621] 00:11:18.230 bw ( KiB/s): min= 4696, max= 7592, per=44.31%, avg=6144.00, stdev=2047.78, samples=2 00:11:18.230 iops : min= 1174, max= 1898, avg=1536.00, 
stdev=511.95, samples=2 00:11:18.230 lat (usec) : 250=21.78%, 500=74.55%, 750=3.49% 00:11:18.230 lat (msec) : 2=0.04%, 4=0.04%, 10=0.04%, 50=0.07% 00:11:18.230 cpu : usr=2.24%, sys=4.28%, ctx=2750, majf=0, minf=1 00:11:18.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.230 issued rwts: total=1214,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.230 job1: (groupid=0, jobs=1): err= 0: pid=1980994: Wed Jul 24 20:05:21 2024 00:11:18.230 read: IOPS=20, BW=81.2KiB/s (83.2kB/s)(84.0KiB/1034msec) 00:11:18.230 slat (nsec): min=9340, max=16051, avg=14725.57, stdev=1282.98 00:11:18.230 clat (usec): min=40940, max=41553, avg=41027.73, stdev=161.83 00:11:18.230 lat (usec): min=40955, max=41568, avg=41042.45, stdev=161.11 00:11:18.230 clat percentiles (usec): 00:11:18.230 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:18.230 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:18.230 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:18.230 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:18.230 | 99.99th=[41681] 00:11:18.230 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:11:18.230 slat (nsec): min=8195, max=53656, avg=14965.54, stdev=5589.84 00:11:18.230 clat (usec): min=208, max=1044, avg=317.23, stdev=80.65 00:11:18.230 lat (usec): min=220, max=1052, avg=332.19, stdev=81.90 00:11:18.230 clat percentiles (usec): 00:11:18.230 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 241], 20.00th=[ 260], 00:11:18.230 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 318], 00:11:18.230 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 498], 00:11:18.230 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 1045], 99.95th=[ 1045], 00:11:18.230 | 99.99th=[ 1045] 00:11:18.230 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.230 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.230 lat (usec) : 250=14.63%, 500=76.74%, 750=4.32%, 1000=0.19% 00:11:18.230 lat (msec) : 2=0.19%, 50=3.94% 00:11:18.230 cpu : usr=0.58%, sys=0.48%, ctx=534, majf=0, minf=1 00:11:18.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.230 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.231 job2: (groupid=0, jobs=1): err= 0: pid=1980995: Wed Jul 24 20:05:21 2024 00:11:18.231 read: IOPS=378, BW=1513KiB/s (1549kB/s)(1516KiB/1002msec) 00:11:18.231 slat (nsec): min=6291, max=35705, avg=13067.16, stdev=5898.06 00:11:18.231 clat (usec): min=298, max=42060, avg=2170.90, stdev=8444.62 00:11:18.231 lat (usec): min=307, max=42068, avg=2183.97, stdev=8445.17 00:11:18.231 clat percentiles (usec): 00:11:18.231 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:11:18.231 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 355], 00:11:18.231 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 498], 00:11:18.231 | 99.00th=[41157], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:11:18.231 | 99.99th=[42206] 00:11:18.231 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:18.231 slat (nsec): min=7563, max=58007, avg=15726.33, stdev=7473.75 00:11:18.231 clat (usec): min=227, max=550, avg=316.15, stdev=50.13 00:11:18.231 lat (usec): min=239, max=585, avg=331.87, stdev=51.52 00:11:18.231 clat percentiles (usec): 00:11:18.231 | 1.00th=[ 235], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 273], 00:11:18.231 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 322], 00:11:18.231 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 383], 95.00th=[ 404], 00:11:18.231 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 553], 99.95th=[ 553], 00:11:18.231 | 99.99th=[ 553] 00:11:18.231 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.231 lat (usec) : 250=2.58%, 500=95.17%, 750=0.34% 00:11:18.231 lat (msec) : 50=1.91% 00:11:18.231 cpu : usr=1.10%, sys=0.90%, ctx=892, majf=0, minf=2 00:11:18.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.231 issued rwts: total=379,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.231 job3: (groupid=0, jobs=1): err= 0: pid=1980996: Wed Jul 24 20:05:21 2024 00:11:18.231 read: IOPS=532, BW=2128KiB/s (2179kB/s)(2160KiB/1015msec) 00:11:18.231 slat (nsec): min=6824, max=34892, avg=15462.13, stdev=5907.47 00:11:18.231 clat (usec): min=296, max=41291, avg=1301.02, stdev=5992.92 00:11:18.231 lat (usec): min=303, max=41299, avg=1316.48, stdev=5993.14 00:11:18.231 clat percentiles (usec): 00:11:18.231 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 338], 20.00th=[ 359], 00:11:18.231 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 408], 00:11:18.231 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 482], 95.00th=[ 498], 00:11:18.231 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:18.231 | 99.99th=[41157] 00:11:18.231 write: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec); 0 zone resets 00:11:18.231 slat (nsec): min=8260, max=53653, avg=14431.71, stdev=6480.12 00:11:18.231 clat (usec): min=208, max=508, avg=276.58, stdev=43.64 00:11:18.231 lat (usec): min=218, max=521, avg=291.01, stdev=44.10 00:11:18.231 clat percentiles (usec): 00:11:18.231 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:11:18.231 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 281], 00:11:18.231 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 363], 00:11:18.231 | 99.00th=[ 408], 99.50th=[ 445], 99.90th=[ 465], 99.95th=[ 510], 00:11:18.231 | 99.99th=[ 510] 00:11:18.231 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=2 00:11:18.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:18.231 lat (usec) : 250=22.06%, 500=76.41%, 750=0.77% 00:11:18.231 lat (msec) : 50=0.77% 00:11:18.231 cpu : usr=0.99%, sys=2.46%, ctx=1566, majf=0, minf=1 00:11:18.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.231 issued rwts: 
total=540,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.231 00:11:18.231 Run status group 0 (all jobs): 00:11:18.231 READ: bw=8333KiB/s (8533kB/s), 81.2KiB/s-4724KiB/s (83.2kB/s-4837kB/s), io=8616KiB (8823kB), run=1002-1034msec 00:11:18.231 WRITE: bw=13.5MiB/s (14.2MB/s), 1981KiB/s-5977KiB/s (2028kB/s-6120kB/s), io=14.0MiB (14.7MB), run=1002-1034msec 00:11:18.231 00:11:18.231 Disk stats (read/write): 00:11:18.231 nvme0n1: ios=1074/1464, merge=0/0, ticks=425/381, in_queue=806, util=86.17% 00:11:18.231 nvme0n2: ios=44/512, merge=0/0, ticks=675/162, in_queue=837, util=86.63% 00:11:18.231 nvme0n3: ios=318/512, merge=0/0, ticks=824/157, in_queue=981, util=90.19% 00:11:18.231 nvme0n4: ios=569/870, merge=0/0, ticks=1191/239, in_queue=1430, util=97.23% 00:11:18.231 20:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:18.231 [global] 00:11:18.231 thread=1 00:11:18.231 invalidate=1 00:11:18.231 rw=randwrite 00:11:18.231 time_based=1 00:11:18.231 runtime=1 00:11:18.231 ioengine=libaio 00:11:18.231 direct=1 00:11:18.231 bs=4096 00:11:18.231 iodepth=1 00:11:18.231 norandommap=0 00:11:18.231 numjobs=1 00:11:18.231 00:11:18.231 verify_dump=1 00:11:18.231 verify_backlog=512 00:11:18.231 verify_state_save=0 00:11:18.231 do_verify=1 00:11:18.231 verify=crc32c-intel 00:11:18.231 [job0] 00:11:18.231 filename=/dev/nvme0n1 00:11:18.231 [job1] 00:11:18.231 filename=/dev/nvme0n2 00:11:18.231 [job2] 00:11:18.231 filename=/dev/nvme0n3 00:11:18.231 [job3] 00:11:18.231 filename=/dev/nvme0n4 00:11:18.231 Could not set queue depth (nvme0n1) 00:11:18.231 Could not set queue depth (nvme0n2) 00:11:18.231 Could not set queue depth (nvme0n3) 00:11:18.231 Could not set queue depth (nvme0n4) 00:11:18.231 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.231 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.231 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.231 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.231 fio-3.35 00:11:18.231 Starting 4 threads 00:11:19.610 00:11:19.610 job0: (groupid=0, jobs=1): err= 0: pid=1981347: Wed Jul 24 20:05:23 2024 00:11:19.610 read: IOPS=255, BW=1022KiB/s (1046kB/s)(1028KiB/1006msec) 00:11:19.610 slat (nsec): min=6574, max=33539, avg=10349.07, stdev=4659.71 00:11:19.610 clat (usec): min=290, max=42173, avg=3206.64, stdev=10470.92 00:11:19.610 lat (usec): min=298, max=42187, avg=3216.99, stdev=10472.55 00:11:19.610 clat percentiles (usec): 00:11:19.610 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:11:19.610 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:11:19.610 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 392], 95.00th=[41157], 00:11:19.610 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:19.610 | 99.99th=[42206] 00:11:19.610 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:11:19.610 slat (nsec): min=10610, max=48266, avg=12428.71, stdev=3471.08 00:11:19.610 clat (usec): min=241, max=432, avg=331.46, stdev=37.41 00:11:19.610 lat (usec): min=257, max=444, avg=343.89, stdev=36.53 00:11:19.610 clat percentiles (usec): 
00:11:19.610 | 1.00th=[ 249], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 306], 00:11:19.610 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:11:19.610 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 396], 95.00th=[ 404], 00:11:19.610 | 99.00th=[ 420], 99.50th=[ 424], 99.90th=[ 433], 99.95th=[ 433], 00:11:19.610 | 99.99th=[ 433] 00:11:19.610 bw ( KiB/s): min= 4096, max= 4096, per=28.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:19.610 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:19.610 lat (usec) : 250=0.78%, 500=96.62%, 1000=0.13% 00:11:19.610 lat (msec) : 2=0.13%, 50=2.34% 00:11:19.610 cpu : usr=0.90%, sys=0.50%, ctx=769, majf=0, minf=2 00:11:19.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.610 issued rwts: total=257,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.610 job1: (groupid=0, jobs=1): err= 0: pid=1981348: Wed Jul 24 20:05:23 2024 00:11:19.610 read: IOPS=365, BW=1463KiB/s (1498kB/s)(1464KiB/1001msec) 00:11:19.610 slat (nsec): min=5772, max=41333, avg=12307.19, stdev=5149.51 00:11:19.610 clat (usec): min=274, max=42001, avg=2234.13, stdev=8599.81 00:11:19.610 lat (usec): min=281, max=42017, avg=2246.44, stdev=8600.86 00:11:19.610 clat percentiles (usec): 00:11:19.610 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 314], 00:11:19.610 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:11:19.610 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 400], 95.00th=[ 537], 00:11:19.610 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:19.610 | 99.99th=[42206] 00:11:19.610 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:19.610 slat (nsec): min=7564, max=23662, avg=11928.73, stdev=2120.35 00:11:19.610 clat (usec): min=207, max=583, avg=329.50, stdev=53.87 00:11:19.610 lat (usec): min=215, max=594, avg=341.43, stdev=54.05 00:11:19.610 clat percentiles (usec): 00:11:19.610 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 262], 20.00th=[ 289], 00:11:19.610 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 343], 00:11:19.610 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 392], 95.00th=[ 408], 00:11:19.610 | 99.00th=[ 433], 99.50th=[ 474], 99.90th=[ 586], 99.95th=[ 586], 00:11:19.610 | 99.99th=[ 586] 00:11:19.610 bw ( KiB/s): min= 4096, max= 4096, per=28.15%, avg=4096.00, stdev= 0.00, samples=1 00:11:19.610 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:19.610 lat (usec) : 250=5.13%, 500=92.48%, 750=0.46% 00:11:19.610 lat (msec) : 50=1.94% 00:11:19.610 cpu : usr=0.90%, sys=1.00%, ctx=879, majf=0, minf=1 00:11:19.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.610 issued rwts: total=366,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.610 job2: (groupid=0, jobs=1): err= 0: pid=1981349: Wed Jul 24 20:05:23 2024 00:11:19.610 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:19.610 slat (nsec): min=6657, max=44771, avg=15384.08, stdev=5396.43 00:11:19.610 clat (usec): min=299, max=41304, 
avg=572.35, stdev=2196.65 00:11:19.610 lat (usec): min=309, max=41316, avg=587.74, stdev=2196.82 00:11:19.610 clat percentiles (usec): 00:11:19.610 | 1.00th=[ 343], 5.00th=[ 371], 10.00th=[ 388], 20.00th=[ 404], 00:11:19.610 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 461], 00:11:19.610 | 70.00th=[ 478], 80.00th=[ 498], 90.00th=[ 523], 95.00th=[ 537], 00:11:19.610 | 99.00th=[ 693], 99.50th=[ 996], 99.90th=[41157], 99.95th=[41157], 00:11:19.610 | 99.99th=[41157] 00:11:19.610 write: IOPS=1239, BW=4959KiB/s (5078kB/s)(4964KiB/1001msec); 0 zone resets 00:11:19.610 slat (nsec): min=7704, max=70212, avg=15042.50, stdev=6230.46 00:11:19.610 clat (usec): min=209, max=483, avg=298.45, stdev=42.94 00:11:19.610 lat (usec): min=218, max=492, avg=313.49, stdev=43.47 00:11:19.610 clat percentiles (usec): 00:11:19.610 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 258], 00:11:19.610 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:11:19.610 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 375], 00:11:19.610 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 453], 99.95th=[ 486], 00:11:19.610 | 99.99th=[ 486] 00:11:19.610 bw ( KiB/s): min= 4248, max= 4248, per=29.19%, avg=4248.00, stdev= 0.00, samples=1 00:11:19.610 iops : min= 1062, max= 1062, avg=1062.00, stdev= 0.00, samples=1 00:11:19.610 lat (usec) : 250=8.17%, 500=83.58%, 750=7.90%, 1000=0.13% 00:11:19.610 lat (msec) : 2=0.09%, 50=0.13% 00:11:19.610 cpu : usr=1.40%, sys=4.00%, ctx=2266, majf=0, minf=1 00:11:19.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.610 issued rwts: total=1024,1241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.610 job3: (groupid=0, jobs=1): err= 0: pid=1981350: Wed Jul 24 20:05:23 2024 00:11:19.610 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:19.610 slat (nsec): min=6649, max=27931, avg=15613.31, stdev=2699.83 00:11:19.610 clat (usec): min=340, max=2195, avg=498.59, stdev=68.76 00:11:19.611 lat (usec): min=358, max=2211, avg=514.20, stdev=68.80 00:11:19.611 clat percentiles (usec): 00:11:19.611 | 1.00th=[ 383], 5.00th=[ 420], 10.00th=[ 441], 20.00th=[ 461], 00:11:19.611 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[ 502], 60.00th=[ 510], 00:11:19.611 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 545], 95.00th=[ 562], 00:11:19.611 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 725], 99.95th=[ 2212], 00:11:19.611 | 99.99th=[ 2212] 00:11:19.611 write: IOPS=1393, BW=5574KiB/s (5708kB/s)(5580KiB/1001msec); 0 zone resets 00:11:19.611 slat (nsec): min=8544, max=50078, avg=14877.64, stdev=5558.18 00:11:19.611 clat (usec): min=208, max=499, avg=317.76, stdev=47.22 00:11:19.611 lat (usec): min=218, max=518, avg=332.64, stdev=47.87 00:11:19.611 clat percentiles (usec): 00:11:19.611 | 1.00th=[ 223], 5.00th=[ 239], 10.00th=[ 255], 20.00th=[ 277], 00:11:19.611 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 326], 00:11:19.611 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 396], 00:11:19.611 | 99.00th=[ 420], 99.50th=[ 445], 99.90th=[ 486], 99.95th=[ 498], 00:11:19.611 | 99.99th=[ 498] 00:11:19.611 bw ( KiB/s): min= 5368, max= 5368, per=36.89%, avg=5368.00, stdev= 0.00, samples=1 00:11:19.611 iops : min= 1342, max= 1342, avg=1342.00, stdev= 0.00, samples=1 00:11:19.611 lat (usec) 
: 250=4.80%, 500=73.75%, 750=21.41% 00:11:19.611 lat (msec) : 4=0.04% 00:11:19.611 cpu : usr=2.00%, sys=3.60%, ctx=2420, majf=0, minf=1 00:11:19.611 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.611 issued rwts: total=1024,1395,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.611 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.611 00:11:19.611 Run status group 0 (all jobs): 00:11:19.611 READ: bw=10.4MiB/s (10.9MB/s), 1022KiB/s-4092KiB/s (1046kB/s-4190kB/s), io=10.4MiB (10.9MB), run=1001-1006msec 00:11:19.611 WRITE: bw=14.2MiB/s (14.9MB/s), 2036KiB/s-5574KiB/s (2085kB/s-5708kB/s), io=14.3MiB (15.0MB), run=1001-1006msec 00:11:19.611 00:11:19.611 Disk stats (read/write): 00:11:19.611 nvme0n1: ios=133/512, merge=0/0, ticks=750/167, in_queue=917, util=91.48% 00:11:19.611 nvme0n2: ios=269/512, merge=0/0, ticks=995/166, in_queue=1161, util=98.55% 00:11:19.611 nvme0n3: ios=828/1024, merge=0/0, ticks=1305/306, in_queue=1611, util=98.67% 00:11:19.611 nvme0n4: ios=899/1024, merge=0/0, ticks=1332/315, in_queue=1647, util=99.32% 00:11:19.611 20:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:19.611 [global] 00:11:19.611 thread=1 00:11:19.611 invalidate=1 00:11:19.611 rw=write 00:11:19.611 time_based=1 00:11:19.611 runtime=1 00:11:19.611 ioengine=libaio 00:11:19.611 direct=1 00:11:19.611 bs=4096 00:11:19.611 iodepth=128 00:11:19.611 norandommap=0 00:11:19.611 numjobs=1 00:11:19.611 00:11:19.611 verify_dump=1 00:11:19.611 verify_backlog=512 00:11:19.611 verify_state_save=0 00:11:19.611 do_verify=1 00:11:19.611 verify=crc32c-intel 00:11:19.611 [job0] 00:11:19.611 filename=/dev/nvme0n1 00:11:19.611 [job1] 00:11:19.611 filename=/dev/nvme0n2 00:11:19.611 [job2] 00:11:19.611 filename=/dev/nvme0n3 00:11:19.611 [job3] 00:11:19.611 filename=/dev/nvme0n4 00:11:19.611 Could not set queue depth (nvme0n1) 00:11:19.611 Could not set queue depth (nvme0n2) 00:11:19.611 Could not set queue depth (nvme0n3) 00:11:19.611 Could not set queue depth (nvme0n4) 00:11:19.869 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.869 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.869 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.869 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.869 fio-3.35 00:11:19.869 Starting 4 threads 00:11:21.240 00:11:21.240 job0: (groupid=0, jobs=1): err= 0: pid=1981588: Wed Jul 24 20:05:24 2024 00:11:21.240 read: IOPS=2428, BW=9714KiB/s (9947kB/s)(9772KiB/1006msec) 00:11:21.240 slat (usec): min=2, max=31298, avg=210.79, stdev=1445.76 00:11:21.240 clat (usec): min=3833, max=77332, avg=27550.41, stdev=14566.56 00:11:21.240 lat (usec): min=8595, max=77339, avg=27761.20, stdev=14628.17 00:11:21.240 clat percentiles (usec): 00:11:21.240 | 1.00th=[10945], 5.00th=[13698], 10.00th=[13960], 20.00th=[14746], 00:11:21.240 | 30.00th=[16909], 40.00th=[22938], 50.00th=[24511], 60.00th=[27919], 00:11:21.240 | 70.00th=[29754], 80.00th=[34341], 90.00th=[44827], 95.00th=[59507], 00:11:21.240 | 
99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:11:21.240 | 99.99th=[77071] 00:11:21.240 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:11:21.240 slat (usec): min=4, max=17863, avg=180.92, stdev=1073.05 00:11:21.240 clat (usec): min=781, max=91280, avg=23573.76, stdev=15764.49 00:11:21.240 lat (usec): min=801, max=91287, avg=23754.68, stdev=15855.84 00:11:21.240 clat percentiles (usec): 00:11:21.240 | 1.00th=[ 5342], 5.00th=[10290], 10.00th=[12125], 20.00th=[14222], 00:11:21.240 | 30.00th=[15401], 40.00th=[16188], 50.00th=[17171], 60.00th=[19792], 00:11:21.240 | 70.00th=[21365], 80.00th=[28705], 90.00th=[47973], 95.00th=[58983], 00:11:21.240 | 99.00th=[80217], 99.50th=[83362], 99.90th=[91751], 99.95th=[91751], 00:11:21.240 | 99.99th=[91751] 00:11:21.240 bw ( KiB/s): min= 9488, max=10992, per=20.07%, avg=10240.00, stdev=1063.49, samples=2 00:11:21.240 iops : min= 2372, max= 2748, avg=2560.00, stdev=265.87, samples=2 00:11:21.240 lat (usec) : 1000=0.04% 00:11:21.240 lat (msec) : 4=0.02%, 10=2.50%, 20=45.35%, 50=45.61%, 100=6.48% 00:11:21.240 cpu : usr=1.89%, sys=3.28%, ctx=243, majf=0, minf=1 00:11:21.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:11:21.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.240 issued rwts: total=2443,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.240 job1: (groupid=0, jobs=1): err= 0: pid=1981589: Wed Jul 24 20:05:24 2024 00:11:21.240 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:11:21.240 slat (usec): min=2, max=15216, avg=132.84, stdev=942.96 00:11:21.240 clat (usec): min=4907, max=39150, avg=16731.50, stdev=4828.04 00:11:21.240 lat (usec): min=4915, max=39159, avg=16864.34, stdev=4898.60 00:11:21.240 clat percentiles (usec): 00:11:21.240 | 1.00th=[ 5735], 5.00th=[11731], 10.00th=[12911], 20.00th=[13829], 00:11:21.240 | 30.00th=[14222], 40.00th=[15139], 50.00th=[15664], 60.00th=[16450], 00:11:21.240 | 70.00th=[16909], 80.00th=[18482], 90.00th=[23987], 95.00th=[27919], 00:11:21.240 | 99.00th=[32375], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:11:21.240 | 99.99th=[39060] 00:11:21.240 write: IOPS=3726, BW=14.6MiB/s (15.3MB/s)(14.8MiB/1015msec); 0 zone resets 00:11:21.240 slat (usec): min=4, max=21750, avg=132.02, stdev=820.22 00:11:21.241 clat (usec): min=4338, max=43390, avg=17345.22, stdev=7700.62 00:11:21.241 lat (usec): min=4348, max=43401, avg=17477.24, stdev=7751.77 00:11:21.241 clat percentiles (usec): 00:11:21.241 | 1.00th=[ 6652], 5.00th=[ 9503], 10.00th=[11731], 20.00th=[12780], 00:11:21.241 | 30.00th=[13435], 40.00th=[14222], 50.00th=[14746], 60.00th=[15795], 00:11:21.241 | 70.00th=[16712], 80.00th=[20579], 90.00th=[28443], 95.00th=[38011], 00:11:21.241 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:11:21.241 | 99.99th=[43254] 00:11:21.241 bw ( KiB/s): min=12856, max=16384, per=28.65%, avg=14620.00, stdev=2494.67, samples=2 00:11:21.241 iops : min= 3214, max= 4096, avg=3655.00, stdev=623.67, samples=2 00:11:21.241 lat (msec) : 10=4.39%, 20=77.49%, 50=18.12% 00:11:21.241 cpu : usr=3.75%, sys=4.24%, ctx=343, majf=0, minf=1 00:11:21.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.241 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.241 issued rwts: total=3584,3782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.241 job2: (groupid=0, jobs=1): err= 0: pid=1981590: Wed Jul 24 20:05:24 2024 00:11:21.241 read: IOPS=2998, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1006msec) 00:11:21.241 slat (usec): min=2, max=28132, avg=174.47, stdev=1247.47 00:11:21.241 clat (usec): min=1824, max=62680, avg=21301.32, stdev=10675.07 00:11:21.241 lat (usec): min=5547, max=65071, avg=21475.80, stdev=10750.19 00:11:21.241 clat percentiles (usec): 00:11:21.241 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[11338], 20.00th=[16057], 00:11:21.241 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17957], 60.00th=[19268], 00:11:21.241 | 70.00th=[20317], 80.00th=[26084], 90.00th=[37487], 95.00th=[47449], 00:11:21.241 | 99.00th=[62653], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:11:21.241 | 99.99th=[62653] 00:11:21.241 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:11:21.241 slat (usec): min=4, max=38713, avg=148.98, stdev=1194.86 00:11:21.241 clat (usec): min=908, max=68922, avg=20258.18, stdev=9291.35 00:11:21.241 lat (usec): min=7025, max=69000, avg=20407.16, stdev=9339.41 00:11:21.241 clat percentiles (usec): 00:11:21.241 | 1.00th=[ 8848], 5.00th=[12256], 10.00th=[13829], 20.00th=[15926], 00:11:21.241 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 00:11:21.241 | 70.00th=[19006], 80.00th=[21103], 90.00th=[30016], 95.00th=[42730], 00:11:21.241 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[68682], 00:11:21.241 | 99.99th=[68682] 00:11:21.241 bw ( KiB/s): min=12288, max=12288, per=24.08%, avg=12288.00, stdev= 0.00, samples=2 00:11:21.241 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:21.241 lat (usec) : 1000=0.02% 00:11:21.241 lat (msec) : 2=0.02%, 10=5.57%, 20=65.16%, 50=26.82%, 100=2.41% 00:11:21.241 cpu : usr=1.89%, sys=3.98%, ctx=188, majf=0, minf=1 00:11:21.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.241 issued rwts: total=3016,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.241 job3: (groupid=0, jobs=1): err= 0: pid=1981594: Wed Jul 24 20:05:24 2024 00:11:21.241 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:11:21.241 slat (usec): min=2, max=21989, avg=154.35, stdev=1170.62 00:11:21.241 clat (usec): min=4921, max=45826, avg=19279.63, stdev=5232.83 00:11:21.241 lat (usec): min=5730, max=45834, avg=19433.98, stdev=5296.55 00:11:21.241 clat percentiles (usec): 00:11:21.241 | 1.00th=[10945], 5.00th=[12387], 10.00th=[15401], 20.00th=[15664], 00:11:21.241 | 30.00th=[15795], 40.00th=[16909], 50.00th=[18482], 60.00th=[20055], 00:11:21.241 | 70.00th=[21103], 80.00th=[22676], 90.00th=[23725], 95.00th=[28705], 00:11:21.241 | 99.00th=[42206], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:11:21.241 | 99.99th=[45876] 00:11:21.241 write: IOPS=3481, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1015msec); 0 zone resets 00:11:21.241 slat (usec): min=4, max=19266, avg=140.60, stdev=943.79 00:11:21.241 clat (usec): min=3409, max=47966, avg=19495.21, stdev=6698.33 00:11:21.241 lat (usec): min=3417, max=47974, avg=19635.81, stdev=6786.54 00:11:21.241 clat 
percentiles (usec): 00:11:21.241 | 1.00th=[ 5538], 5.00th=[10945], 10.00th=[13829], 20.00th=[15795], 00:11:21.241 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17957], 60.00th=[19792], 00:11:21.241 | 70.00th=[21365], 80.00th=[23200], 90.00th=[25822], 95.00th=[32375], 00:11:21.241 | 99.00th=[44827], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 00:11:21.241 | 99.99th=[47973] 00:11:21.241 bw ( KiB/s): min=12464, max=14792, per=26.71%, avg=13628.00, stdev=1646.14, samples=2 00:11:21.241 iops : min= 3116, max= 3698, avg=3407.00, stdev=411.54, samples=2 00:11:21.241 lat (msec) : 4=0.09%, 10=2.66%, 20=57.96%, 50=39.28% 00:11:21.241 cpu : usr=2.27%, sys=4.64%, ctx=293, majf=0, minf=1 00:11:21.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.241 issued rwts: total=3072,3534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.241 00:11:21.241 Run status group 0 (all jobs): 00:11:21.241 READ: bw=46.6MiB/s (48.9MB/s), 9714KiB/s-13.8MiB/s (9947kB/s-14.5MB/s), io=47.3MiB (49.6MB), run=1006-1015msec 00:11:21.241 WRITE: bw=49.8MiB/s (52.3MB/s), 9.94MiB/s-14.6MiB/s (10.4MB/s-15.3MB/s), io=50.6MiB (53.0MB), run=1006-1015msec 00:11:21.241 00:11:21.241 Disk stats (read/write): 00:11:21.241 nvme0n1: ios=2086/2410, merge=0/0, ticks=23932/24911, in_queue=48843, util=96.59% 00:11:21.241 nvme0n2: ios=3123/3263, merge=0/0, ticks=37048/32938, in_queue=69986, util=96.33% 00:11:21.241 nvme0n3: ios=2219/2560, merge=0/0, ticks=21393/22718, in_queue=44111, util=87.16% 00:11:21.241 nvme0n4: ios=2579/2679, merge=0/0, ticks=36940/38206, in_queue=75146, util=96.92% 00:11:21.241 20:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:21.241 [global] 00:11:21.241 thread=1 00:11:21.241 invalidate=1 00:11:21.241 rw=randwrite 00:11:21.241 time_based=1 00:11:21.241 runtime=1 00:11:21.241 ioengine=libaio 00:11:21.241 direct=1 00:11:21.241 bs=4096 00:11:21.241 iodepth=128 00:11:21.241 norandommap=0 00:11:21.241 numjobs=1 00:11:21.241 00:11:21.241 verify_dump=1 00:11:21.241 verify_backlog=512 00:11:21.241 verify_state_save=0 00:11:21.241 do_verify=1 00:11:21.241 verify=crc32c-intel 00:11:21.241 [job0] 00:11:21.241 filename=/dev/nvme0n1 00:11:21.241 [job1] 00:11:21.241 filename=/dev/nvme0n2 00:11:21.241 [job2] 00:11:21.241 filename=/dev/nvme0n3 00:11:21.241 [job3] 00:11:21.241 filename=/dev/nvme0n4 00:11:21.241 Could not set queue depth (nvme0n1) 00:11:21.241 Could not set queue depth (nvme0n2) 00:11:21.241 Could not set queue depth (nvme0n3) 00:11:21.241 Could not set queue depth (nvme0n4) 00:11:21.241 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.241 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.241 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.241 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.241 fio-3.35 00:11:21.241 Starting 4 threads 00:11:22.621 00:11:22.621 job0: (groupid=0, jobs=1): err= 0: pid=1981827: Wed Jul 24 20:05:26 2024 
00:11:22.621 read: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec) 00:11:22.621 slat (usec): min=3, max=37963, avg=157.82, stdev=1260.73 00:11:22.621 clat (usec): min=5126, max=49621, avg=18940.48, stdev=7275.43 00:11:22.621 lat (usec): min=5136, max=49640, avg=19098.30, stdev=7337.69 00:11:22.621 clat percentiles (usec): 00:11:22.621 | 1.00th=[ 7242], 5.00th=[13042], 10.00th=[14091], 20.00th=[14877], 00:11:22.621 | 30.00th=[15664], 40.00th=[15795], 50.00th=[15926], 60.00th=[16188], 00:11:22.621 | 70.00th=[16909], 80.00th=[23725], 90.00th=[29230], 95.00th=[40109], 00:11:22.621 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:22.621 | 99.99th=[49546] 00:11:22.621 write: IOPS=2944, BW=11.5MiB/s (12.1MB/s)(11.7MiB/1021msec); 0 zone resets 00:11:22.621 slat (usec): min=4, max=17497, avg=191.13, stdev=1061.91 00:11:22.621 clat (msec): min=2, max=143, avg=26.95, stdev=26.13 00:11:22.621 lat (msec): min=2, max=143, avg=27.14, stdev=26.29 00:11:22.621 clat percentiles (msec): 00:11:22.621 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 14], 00:11:22.621 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 17], 00:11:22.621 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 61], 95.00th=[ 82], 00:11:22.621 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:11:22.621 | 99.99th=[ 144] 00:11:22.621 bw ( KiB/s): min= 6640, max=16384, per=21.71%, avg=11512.00, stdev=6890.05, samples=2 00:11:22.621 iops : min= 1660, max= 4096, avg=2878.00, stdev=1722.51, samples=2 00:11:22.621 lat (msec) : 4=0.36%, 10=5.53%, 20=63.73%, 50=24.06%, 100=4.19% 00:11:22.621 lat (msec) : 250=2.14% 00:11:22.621 cpu : usr=3.24%, sys=3.73%, ctx=360, majf=0, minf=1 00:11:22.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:22.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.621 issued rwts: total=2560,3006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.621 job1: (groupid=0, jobs=1): err= 0: pid=1981828: Wed Jul 24 20:05:26 2024 00:11:22.621 read: IOPS=3008, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1021msec) 00:11:22.621 slat (usec): min=2, max=18871, avg=145.03, stdev=1001.35 00:11:22.621 clat (usec): min=7082, max=56926, avg=18327.30, stdev=5847.29 00:11:22.621 lat (usec): min=7087, max=56937, avg=18472.33, stdev=5905.38 00:11:22.621 clat percentiles (usec): 00:11:22.621 | 1.00th=[ 9241], 5.00th=[13829], 10.00th=[14746], 20.00th=[15533], 00:11:22.621 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:11:22.621 | 70.00th=[16909], 80.00th=[19530], 90.00th=[27132], 95.00th=[31065], 00:11:22.621 | 99.00th=[37487], 99.50th=[41157], 99.90th=[56886], 99.95th=[56886], 00:11:22.621 | 99.99th=[56886] 00:11:22.621 write: IOPS=3288, BW=12.8MiB/s (13.5MB/s)(13.1MiB/1021msec); 0 zone resets 00:11:22.621 slat (usec): min=4, max=32939, avg=142.80, stdev=1007.47 00:11:22.621 clat (usec): min=3835, max=79025, avg=21804.07, stdev=12990.26 00:11:22.621 lat (usec): min=3849, max=79036, avg=21946.86, stdev=13059.86 00:11:22.621 clat percentiles (usec): 00:11:22.621 | 1.00th=[ 5407], 5.00th=[ 9896], 10.00th=[12780], 20.00th=[14746], 00:11:22.621 | 30.00th=[15795], 40.00th=[16581], 50.00th=[16909], 60.00th=[17957], 00:11:22.621 | 70.00th=[21627], 80.00th=[29230], 90.00th=[34341], 95.00th=[45351], 00:11:22.621 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 
99.95th=[79168], 00:11:22.621 | 99.99th=[79168] 00:11:22.621 bw ( KiB/s): min=12120, max=13720, per=24.37%, avg=12920.00, stdev=1131.37, samples=2 00:11:22.621 iops : min= 3030, max= 3430, avg=3230.00, stdev=282.84, samples=2 00:11:22.621 lat (msec) : 4=0.33%, 10=4.57%, 20=67.47%, 50=25.19%, 100=2.44% 00:11:22.621 cpu : usr=4.71%, sys=4.41%, ctx=393, majf=0, minf=1 00:11:22.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:22.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.621 issued rwts: total=3072,3358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.621 job2: (groupid=0, jobs=1): err= 0: pid=1981829: Wed Jul 24 20:05:26 2024 00:11:22.621 read: IOPS=3228, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1006msec) 00:11:22.621 slat (usec): min=3, max=13796, avg=151.11, stdev=926.30 00:11:22.621 clat (usec): min=2334, max=80421, avg=18600.97, stdev=7950.67 00:11:22.621 lat (usec): min=7127, max=80427, avg=18752.08, stdev=8014.30 00:11:22.621 clat percentiles (usec): 00:11:22.621 | 1.00th=[ 7570], 5.00th=[10159], 10.00th=[13042], 20.00th=[15664], 00:11:22.621 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16909], 60.00th=[18220], 00:11:22.621 | 70.00th=[18744], 80.00th=[19530], 90.00th=[22938], 95.00th=[33817], 00:11:22.621 | 99.00th=[55837], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:11:22.621 | 99.99th=[80217] 00:11:22.621 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:11:22.621 slat (usec): min=4, max=31493, avg=128.50, stdev=688.54 00:11:22.621 clat (usec): min=1653, max=72128, avg=18655.64, stdev=7360.62 00:11:22.622 lat (usec): min=1667, max=72139, avg=18784.14, stdev=7381.39 00:11:22.622 clat percentiles (usec): 00:11:22.622 | 1.00th=[ 5604], 5.00th=[12387], 10.00th=[14746], 20.00th=[16712], 00:11:22.622 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:11:22.622 | 70.00th=[18220], 80.00th=[18744], 90.00th=[21103], 95.00th=[25297], 00:11:22.622 | 99.00th=[49546], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:11:22.622 | 99.99th=[71828] 00:11:22.622 bw ( KiB/s): min=12312, max=16384, per=27.06%, avg=14348.00, stdev=2879.34, samples=2 00:11:22.622 iops : min= 3078, max= 4096, avg=3587.00, stdev=719.83, samples=2 00:11:22.622 lat (msec) : 2=0.16%, 4=0.25%, 10=3.21%, 20=81.12%, 50=14.49% 00:11:22.622 lat (msec) : 100=0.78% 00:11:22.622 cpu : usr=3.18%, sys=7.46%, ctx=492, majf=0, minf=1 00:11:22.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:22.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.622 issued rwts: total=3248,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.622 job3: (groupid=0, jobs=1): err= 0: pid=1981831: Wed Jul 24 20:05:26 2024 00:11:22.622 read: IOPS=3351, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1008msec) 00:11:22.622 slat (usec): min=2, max=29220, avg=155.46, stdev=1071.46 00:11:22.622 clat (usec): min=2269, max=59374, avg=18936.98, stdev=7228.83 00:11:22.622 lat (usec): min=7429, max=59392, avg=19092.44, stdev=7299.89 00:11:22.622 clat percentiles (usec): 00:11:22.622 | 1.00th=[ 7767], 5.00th=[12518], 10.00th=[14222], 20.00th=[16188], 00:11:22.622 | 30.00th=[16319], 
40.00th=[16581], 50.00th=[17171], 60.00th=[17695], 00:11:22.622 | 70.00th=[17957], 80.00th=[19792], 90.00th=[23462], 95.00th=[35390], 00:11:22.622 | 99.00th=[54789], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:11:22.622 | 99.99th=[59507] 00:11:22.622 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:11:22.622 slat (usec): min=4, max=23184, avg=122.25, stdev=651.65 00:11:22.622 clat (usec): min=8953, max=43748, avg=17826.16, stdev=3514.61 00:11:22.622 lat (usec): min=8989, max=43773, avg=17948.41, stdev=3546.14 00:11:22.622 clat percentiles (usec): 00:11:22.622 | 1.00th=[10159], 5.00th=[13960], 10.00th=[15270], 20.00th=[16057], 00:11:22.622 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:11:22.622 | 70.00th=[18220], 80.00th=[19006], 90.00th=[20579], 95.00th=[23200], 00:11:22.622 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:11:22.622 | 99.99th=[43779] 00:11:22.622 bw ( KiB/s): min=12408, max=16264, per=27.04%, avg=14336.00, stdev=2726.60, samples=2 00:11:22.622 iops : min= 3102, max= 4066, avg=3584.00, stdev=681.65, samples=2 00:11:22.622 lat (msec) : 4=0.01%, 10=0.85%, 20=83.37%, 50=15.14%, 100=0.63% 00:11:22.622 cpu : usr=3.18%, sys=5.46%, ctx=453, majf=0, minf=1 00:11:22.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:22.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.622 issued rwts: total=3378,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.622 00:11:22.622 Run status group 0 (all jobs): 00:11:22.622 READ: bw=46.9MiB/s (49.2MB/s), 9.79MiB/s-13.1MiB/s (10.3MB/s-13.7MB/s), io=47.9MiB (50.2MB), run=1006-1021msec 00:11:22.622 WRITE: bw=51.8MiB/s (54.3MB/s), 11.5MiB/s-13.9MiB/s (12.1MB/s-14.6MB/s), io=52.9MiB (55.4MB), run=1006-1021msec 00:11:22.622 00:11:22.622 Disk stats (read/write): 00:11:22.622 nvme0n1: ios=2237/2560, merge=0/0, ticks=41642/61641, in_queue=103283, util=86.27% 00:11:22.622 nvme0n2: ios=2594/2935, merge=0/0, ticks=33043/37366, in_queue=70409, util=97.86% 00:11:22.622 nvme0n3: ios=2580/3039, merge=0/0, ticks=26847/29050, in_queue=55897, util=96.32% 00:11:22.622 nvme0n4: ios=2635/3072, merge=0/0, ticks=30432/31113, in_queue=61545, util=94.37% 00:11:22.622 20:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:22.622 20:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1981964 00:11:22.622 20:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:22.622 20:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:22.622 [global] 00:11:22.622 thread=1 00:11:22.622 invalidate=1 00:11:22.622 rw=read 00:11:22.622 time_based=1 00:11:22.622 runtime=10 00:11:22.622 ioengine=libaio 00:11:22.622 direct=1 00:11:22.622 bs=4096 00:11:22.622 iodepth=1 00:11:22.622 norandommap=1 00:11:22.622 numjobs=1 00:11:22.622 00:11:22.622 [job0] 00:11:22.622 filename=/dev/nvme0n1 00:11:22.622 [job1] 00:11:22.622 filename=/dev/nvme0n2 00:11:22.622 [job2] 00:11:22.622 filename=/dev/nvme0n3 00:11:22.622 [job3] 00:11:22.622 filename=/dev/nvme0n4 00:11:22.622 Could not set queue depth (nvme0n1) 00:11:22.622 Could not set queue depth (nvme0n2) 00:11:22.622 Could not set 
queue depth (nvme0n3) 00:11:22.622 Could not set queue depth (nvme0n4) 00:11:22.880 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.880 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.880 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.880 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.880 fio-3.35 00:11:22.880 Starting 4 threads 00:11:25.408 20:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:25.974 20:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:25.974 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=10579968, buflen=4096 00:11:25.974 fio: pid=1982180, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:26.232 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=23941120, buflen=4096 00:11:26.232 fio: pid=1982179, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:26.232 20:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.232 20:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:26.491 20:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.491 20:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:26.491 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=18993152, buflen=4096 00:11:26.491 fio: pid=1982165, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:27.058 20:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.058 20:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:27.058 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=31838208, buflen=4096 00:11:27.058 fio: pid=1982178, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:27.058 00:11:27.058 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1982165: Wed Jul 24 20:05:30 2024 00:11:27.058 read: IOPS=1285, BW=5142KiB/s (5266kB/s)(18.1MiB/3607msec) 00:11:27.058 slat (usec): min=5, max=10268, avg=19.78, stdev=233.34 00:11:27.058 clat (usec): min=284, max=48424, avg=749.90, stdev=3765.17 00:11:27.058 lat (usec): min=291, max=48441, avg=769.68, stdev=3772.31 00:11:27.058 clat percentiles (usec): 00:11:27.058 | 1.00th=[ 310], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 359], 00:11:27.058 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 392], 00:11:27.058 | 70.00th=[ 404], 80.00th=[ 429], 90.00th=[ 490], 95.00th=[ 519], 00:11:27.058 | 99.00th=[ 898], 99.50th=[41157], 
99.90th=[41681], 99.95th=[44827], 00:11:27.058 | 99.99th=[48497] 00:11:27.058 bw ( KiB/s): min= 96, max= 9992, per=24.84%, avg=5215.00, stdev=4107.92, samples=7 00:11:27.058 iops : min= 24, max= 2498, avg=1303.71, stdev=1026.99, samples=7 00:11:27.058 lat (usec) : 500=92.15%, 750=6.77%, 1000=0.09% 00:11:27.058 lat (msec) : 2=0.04%, 4=0.02%, 20=0.06%, 50=0.84% 00:11:27.058 cpu : usr=0.75%, sys=2.05%, ctx=4643, majf=0, minf=1 00:11:27.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.058 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.058 issued rwts: total=4638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.058 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1982178: Wed Jul 24 20:05:30 2024 00:11:27.058 read: IOPS=1958, BW=7832KiB/s (8020kB/s)(30.4MiB/3970msec) 00:11:27.058 slat (usec): min=6, max=10901, avg=13.10, stdev=179.94 00:11:27.058 clat (usec): min=261, max=41202, avg=495.09, stdev=2106.90 00:11:27.058 lat (usec): min=268, max=52002, avg=507.24, stdev=2161.51 00:11:27.058 clat percentiles (usec): 00:11:27.058 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310], 00:11:27.058 | 30.00th=[ 338], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 400], 00:11:27.058 | 70.00th=[ 408], 80.00th=[ 429], 90.00th=[ 469], 95.00th=[ 510], 00:11:27.058 | 99.00th=[ 660], 99.50th=[ 832], 99.90th=[41157], 99.95th=[41157], 00:11:27.058 | 99.99th=[41157] 00:11:27.058 bw ( KiB/s): min= 4298, max=11432, per=41.81%, avg=8778.57, stdev=2194.05, samples=7 00:11:27.058 iops : min= 1074, max= 2858, avg=2194.57, stdev=548.68, samples=7 00:11:27.058 lat (usec) : 500=94.11%, 750=5.30%, 1000=0.15% 00:11:27.058 lat (msec) : 2=0.13%, 4=0.01%, 10=0.01%, 50=0.27% 00:11:27.058 cpu : usr=1.18%, sys=3.25%, ctx=7777, majf=0, minf=1 00:11:27.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.058 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.058 issued rwts: total=7774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.058 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1982179: Wed Jul 24 20:05:30 2024 00:11:27.058 read: IOPS=1812, BW=7247KiB/s (7421kB/s)(22.8MiB/3226msec) 00:11:27.058 slat (nsec): min=6061, max=36288, avg=11094.28, stdev=3535.71 00:11:27.058 clat (usec): min=297, max=43941, avg=534.20, stdev=2263.83 00:11:27.058 lat (usec): min=305, max=43959, avg=545.29, stdev=2264.07 00:11:27.058 clat percentiles (usec): 00:11:27.059 | 1.00th=[ 322], 5.00th=[ 347], 10.00th=[ 363], 20.00th=[ 375], 00:11:27.059 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 408], 00:11:27.059 | 70.00th=[ 416], 80.00th=[ 429], 90.00th=[ 453], 95.00th=[ 486], 00:11:27.059 | 99.00th=[ 578], 99.50th=[ 1037], 99.90th=[41157], 99.95th=[41157], 00:11:27.059 | 99.99th=[43779] 00:11:27.059 bw ( KiB/s): min= 1160, max= 9644, per=36.88%, avg=7743.33, stdev=3275.25, samples=6 00:11:27.059 iops : min= 290, max= 2411, avg=1935.83, stdev=818.81, samples=6 00:11:27.059 lat (usec) : 500=96.39%, 750=2.98%, 1000=0.10% 00:11:27.059 lat (msec) : 2=0.14%, 4=0.02%, 10=0.05%, 50=0.31% 00:11:27.059 cpu 
: usr=0.74%, sys=2.82%, ctx=5849, majf=0, minf=1 00:11:27.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.059 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.059 issued rwts: total=5846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.059 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1982180: Wed Jul 24 20:05:30 2024 00:11:27.059 read: IOPS=870, BW=3480KiB/s (3563kB/s)(10.1MiB/2969msec) 00:11:27.059 slat (nsec): min=7984, max=42762, avg=11899.90, stdev=3454.57 00:11:27.059 clat (usec): min=302, max=42032, avg=1123.45, stdev=5327.24 00:11:27.059 lat (usec): min=311, max=42050, avg=1135.35, stdev=5327.99 00:11:27.059 clat percentiles (usec): 00:11:27.059 | 1.00th=[ 318], 5.00th=[ 343], 10.00th=[ 359], 20.00th=[ 375], 00:11:27.059 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 396], 60.00th=[ 404], 00:11:27.059 | 70.00th=[ 420], 80.00th=[ 445], 90.00th=[ 506], 95.00th=[ 586], 00:11:27.059 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:27.059 | 99.99th=[42206] 00:11:27.059 bw ( KiB/s): min= 96, max= 9664, per=19.58%, avg=4111.60, stdev=4767.55, samples=5 00:11:27.059 iops : min= 24, max= 2416, avg=1027.80, stdev=1191.76, samples=5 00:11:27.059 lat (usec) : 500=89.01%, 750=9.13%, 1000=0.04% 00:11:27.059 lat (msec) : 10=0.04%, 50=1.74% 00:11:27.059 cpu : usr=0.67%, sys=1.52%, ctx=2585, majf=0, minf=1 00:11:27.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.059 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.059 issued rwts: total=2584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.059 00:11:27.059 Run status group 0 (all jobs): 00:11:27.059 READ: bw=20.5MiB/s (21.5MB/s), 3480KiB/s-7832KiB/s (3563kB/s-8020kB/s), io=81.4MiB (85.4MB), run=2969-3970msec 00:11:27.059 00:11:27.059 Disk stats (read/write): 00:11:27.059 nvme0n1: ios=4636/0, merge=0/0, ticks=3381/0, in_queue=3381, util=94.90% 00:11:27.059 nvme0n2: ios=7769/0, merge=0/0, ticks=3611/0, in_queue=3611, util=95.88% 00:11:27.059 nvme0n3: ios=5812/0, merge=0/0, ticks=2945/0, in_queue=2945, util=96.61% 00:11:27.059 nvme0n4: ios=2580/0, merge=0/0, ticks=2745/0, in_queue=2745, util=96.69% 00:11:27.317 20:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.317 20:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:27.576 20:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.576 20:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:27.835 20:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.835 20:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:28.402 20:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.402 20:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1981964 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:28.660 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.918 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:28.918 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:28.918 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:28.918 nvmf hotplug test: fio failed as expected 00:11:28.918 20:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
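A recap of the hotplug test that just finished, for anyone replaying it outside CI: fio.sh backgrounds a 10-second read workload, deletes the backing bdevs over RPC while that I/O is still in flight, and then treats a non-zero fio exit as the pass condition. A minimal sketch of the same flow, assuming a running target that exports the Malloc/raid bdevs through nqn.2016-06.io.spdk:cnode1 (paths shortened to scripts/):

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # queued reads against nvme0n1..n4
    fio_pid=$!
    sleep 3
    scripts/rpc.py bdev_raid_delete concat0                    # hot-remove while I/O is in flight
    scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The err=121 (Remote I/O error) and err=5 (Input/output error) lines in the fio summary above are exactly that expected failure surfacing through the deleted namespaces.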
00:11:29.485 rmmod nvme_tcp 00:11:29.485 rmmod nvme_fabrics 00:11:29.485 rmmod nvme_keyring 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1979777 ']' 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1979777 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1979777 ']' 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1979777 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1979777 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1979777' 00:11:29.485 killing process with pid 1979777 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1979777 00:11:29.485 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1979777 00:11:30.059 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.059 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.059 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.059 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.059 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.059 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.059 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.059 20:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.993 00:11:31.993 real 0m28.004s 00:11:31.993 user 1m40.044s 00:11:31.993 sys 0m7.807s 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.993 ************************************ 00:11:31.993 END TEST nvmf_fio_target 00:11:31.993 ************************************ 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.993 ************************************ 00:11:31.993 START TEST nvmf_bdevio 00:11:31.993 ************************************ 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:31.993 * Looking for test storage... 00:11:31.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:31.993 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
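Before the verbose PATH expansion below, note that the values bdevio.sh actually consumes from test/nvmf/common.sh are few; condensed from the trace above (the hostnqn/hostid pair comes from nvme gen-hostnqn and therefore differs per machine):

    NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NET_TYPE=phy                            # real e810 ports, not veth pairs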
00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.994 20:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.284 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:35.285 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:35.285 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.285 
20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:35.285 Found net devices under 0000:84:00.0: cvl_0_0 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:35.285 Found net devices under 0000:84:00.1: cvl_0_1 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.285 20:05:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:11:35.285 00:11:35.285 --- 10.0.0.2 ping statistics --- 00:11:35.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.285 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:11:35.285 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:11:35.285 00:11:35.285 --- 10.0.0.1 ping statistics --- 00:11:35.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.285 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1984960 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1984960 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1984960 ']' 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.286 20:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.286 [2024-07-24 20:05:38.680148] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
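The pings above close out nvmf_tcp_init: the detected net devices under the two E810 ports became cvl_0_0/cvl_0_1, the target port was moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port kept 10.0.0.1/24 in the root namespace, and the target app is then launched under ip netns exec. A minimal sketch of the same wiring, assuming a veth pair stands in for the two physical ice ports (tgt_ns, veth_tgt and veth_ini are illustrative names, not the script's):
# Hedged reconstruction of the nvmf_tcp_init topology for a host without
# a dual-port NIC; a veth pair substitutes for cvl_0_0/cvl_0_1.
ip netns add tgt_ns                                   # plays the role of cvl_0_0_ns_spdk
ip link add veth_ini type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns                     # "target" port joins the namespace
ip addr add 10.0.0.1/24 dev veth_ini                  # initiator side, root namespace
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_ini up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # same rule as the log
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec tgt_ns ping -c 1 10.0.0.1               # target namespace -> root namespace
Every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array), which is why nvmf_tgt binds 10.0.0.2 inside the namespace while the initiator connects from the root namespace.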
00:11:35.286 [2024-07-24 20:05:38.680262] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.286 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.286 [2024-07-24 20:05:38.802751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.286 [2024-07-24 20:05:39.024211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.286 [2024-07-24 20:05:39.024330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.286 [2024-07-24 20:05:39.024366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.286 [2024-07-24 20:05:39.024400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.286 [2024-07-24 20:05:39.024442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.286 [2024-07-24 20:05:39.024626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:35.286 [2024-07-24 20:05:39.024746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:35.286 [2024-07-24 20:05:39.024827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:35.286 [2024-07-24 20:05:39.024832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.221 [2024-07-24 20:05:39.851861] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.221 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.222 Malloc0 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.222 [2024-07-24 20:05:39.907638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:36.222 { 00:11:36.222 "params": { 00:11:36.222 "name": "Nvme$subsystem", 00:11:36.222 "trtype": "$TEST_TRANSPORT", 00:11:36.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:36.222 "adrfam": "ipv4", 00:11:36.222 "trsvcid": "$NVMF_PORT", 00:11:36.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:36.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:36.222 "hdgst": ${hdgst:-false}, 00:11:36.222 "ddgst": ${ddgst:-false} 00:11:36.222 }, 00:11:36.222 "method": "bdev_nvme_attach_controller" 00:11:36.222 } 00:11:36.222 EOF 00:11:36.222 )") 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
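gen_nvmf_target_json above is plain shell templating: for each subsystem a heredoc expands the connection parameters into a JSON stanza, jq normalizes it, and the printed result reaches bdevio through the anonymous descriptor (/dev/fd/62) that process substitution opened for --json. A stripped-down sketch of the pattern, with a hypothetical consume_json standing in for the bdevio invocation (the test's real helper additionally folds the stanzas into SPDK's full JSON config envelope):
# Hedged sketch: expand one attach-controller stanza, validate it with jq,
# and hand the stream to a consumer as an anonymous /dev/fd path.
gen_json() {
    local subsystem=1
    cat <<EOF | jq .
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
consume_json --json <(gen_json)   # <(...) shows up in argv as /dev/fd/<n>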
00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:36.222 20:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:36.222 "params": { 00:11:36.222 "name": "Nvme1", 00:11:36.222 "trtype": "tcp", 00:11:36.222 "traddr": "10.0.0.2", 00:11:36.222 "adrfam": "ipv4", 00:11:36.222 "trsvcid": "4420", 00:11:36.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:36.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:36.222 "hdgst": false, 00:11:36.222 "ddgst": false 00:11:36.222 }, 00:11:36.222 "method": "bdev_nvme_attach_controller" 00:11:36.222 }' 00:11:36.222 [2024-07-24 20:05:39.962501] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:11:36.222 [2024-07-24 20:05:39.962587] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985125 ] 00:11:36.480 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.480 [2024-07-24 20:05:40.056262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:36.480 [2024-07-24 20:05:40.195837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.480 [2024-07-24 20:05:40.195898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.480 [2024-07-24 20:05:40.195903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.739 I/O targets: 00:11:36.739 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:36.739 00:11:36.739 00:11:36.739 CUnit - A unit testing framework for C - Version 2.1-3 00:11:36.739 http://cunit.sourceforge.net/ 00:11:36.739 00:11:36.739 00:11:36.739 Suite: bdevio tests on: Nvme1n1 00:11:36.739 Test: blockdev write read block ...passed 00:11:36.997 Test: blockdev write zeroes read block ...passed 00:11:36.997 Test: blockdev write zeroes read no split ...passed 00:11:36.997 Test: blockdev write zeroes read split ...passed 00:11:36.997 Test: blockdev write zeroes read split partial ...passed 00:11:36.997 Test: blockdev reset ...[2024-07-24 20:05:40.565132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:36.997 [2024-07-24 20:05:40.565263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f8bd0 (9): Bad file descriptor 00:11:36.997 [2024-07-24 20:05:40.619951] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:36.997 passed 00:11:36.997 Test: blockdev write read 8 blocks ...passed 00:11:36.997 Test: blockdev write read size > 128k ...passed 00:11:36.997 Test: blockdev write read invalid size ...passed 00:11:36.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:36.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:36.997 Test: blockdev write read max offset ...passed 00:11:37.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:37.256 Test: blockdev writev readv 8 blocks ...passed 00:11:37.256 Test: blockdev writev readv 30 x 1block ...passed 00:11:37.256 Test: blockdev writev readv block ...passed 00:11:37.256 Test: blockdev writev readv size > 128k ...passed 00:11:37.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:37.256 Test: blockdev comparev and writev ...[2024-07-24 20:05:40.839027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:37.256 [2024-07-24 20:05:40.839075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:37.256 [2024-07-24 20:05:40.839114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:37.256 [2024-07-24 20:05:40.839136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:37.256 [2024-07-24 20:05:40.839754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:37.256 [2024-07-24 20:05:40.839800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:37.256 [2024-07-24 20:05:40.839834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:37.256 [2024-07-24 20:05:40.839865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:37.256 [2024-07-24 20:05:40.840445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:37.256 [2024-07-24 20:05:40.840490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:37.256 [2024-07-24 20:05:40.840521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:37.256 [2024-07-24 20:05:40.840543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:37.256 [2024-07-24 20:05:40.841049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:37.256 [2024-07-24 20:05:40.841082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:37.256 [2024-07-24 20:05:40.841110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:37.256 [2024-07-24 20:05:40.841132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:37.256 passed
00:11:37.256 Test: blockdev nvme passthru rw ...passed
00:11:37.256 Test: blockdev nvme passthru vendor specific ...[2024-07-24 20:05:40.924812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:37.256 [2024-07-24 20:05:40.924848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:37.256 [2024-07-24 20:05:40.925083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:37.256 [2024-07-24 20:05:40.925115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:37.256 [2024-07-24 20:05:40.925332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:37.256 [2024-07-24 20:05:40.925363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:37.256 [2024-07-24 20:05:40.925622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:37.256 [2024-07-24 20:05:40.925654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:37.256 passed
00:11:37.256 Test: blockdev nvme admin passthru ...passed
00:11:37.256 Test: blockdev copy ...passed
00:11:37.256
00:11:37.256 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:37.256               suites      1      1    n/a      0        0
00:11:37.256                tests     23     23     23      0        0
00:11:37.256              asserts    152    152    152      0      n/a
00:11:37.256
00:11:37.256 Elapsed time = 1.069 seconds
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:37.514 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:37.514 rmmod nvme_tcp
00:11:37.514 rmmod nvme_fabrics
00:11:37.514 rmmod nvme_keyring
00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e
00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0
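The teardown above runs strictly in this order so nothing still holds a reference when the next test begins: the subsystem is deleted over RPC, buffers are synced, then the kernel initiator modules are unloaded under set +e with up to 20 retry passes, since nvme_tcp can stay pinned for a moment; the lines that follow then kill the target process and tear down the namespace. Condensed into a hedged sketch; $nvmfpid and the sleep backoff are illustrative, and rpc.py is SPDK's stock RPC client:
# Teardown order used by nvmftestfini/nvmfcleanup, compressed.
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # detach the subsystem first
sync                                                      # flush anything the test left dirty
set +e                                                    # unload may fail while refs drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1                                               # assumed backoff between attempts
done
set -e
kill "$nvmfpid" && wait "$nvmfpid"                        # killprocess: stop nvmf_tgt
ip netns delete cvl_0_0_ns_spdk                           # _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1                                  # drop the initiator-side address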
00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1984960 ']' 00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1984960 00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1984960 ']' 00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1984960 00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:37.773 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.774 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1984960 00:11:37.774 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:37.774 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:37.774 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1984960' 00:11:37.774 killing process with pid 1984960 00:11:37.774 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1984960 00:11:37.774 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1984960 00:11:38.034 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:38.034 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:38.034 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:38.034 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.034 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:38.034 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.034 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.034 20:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:40.569 00:11:40.569 real 0m8.163s 00:11:40.569 user 0m14.082s 00:11:40.569 sys 0m2.911s 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.569 ************************************ 00:11:40.569 END TEST nvmf_bdevio 00:11:40.569 ************************************ 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:40.569 00:11:40.569 real 4m34.289s 00:11:40.569 user 11m44.402s 00:11:40.569 sys 1m23.823s 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:40.569 ************************************ 00:11:40.569 END TEST nvmf_target_core 00:11:40.569 ************************************ 00:11:40.569 20:05:43 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:40.569 20:05:43 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:40.569 20:05:43 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.569 20:05:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:40.569 ************************************ 00:11:40.569 START TEST nvmf_target_extra 00:11:40.569 ************************************ 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:40.569 * Looking for test storage... 00:11:40.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.569 20:05:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:40.570 20:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.570 ************************************ 00:11:40.570 START TEST nvmf_example 00:11:40.570 ************************************ 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:40.570 * Looking for test storage... 00:11:40.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.570 20:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:40.570 20:05:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:43.107 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:43.107 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:43.107 Found net devices under 0000:84:00.0: cvl_0_0 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.107 20:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:43.107 Found net devices under 0000:84:00.1: cvl_0_1 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.107 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.108 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.366 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.367 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.367 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.367 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.367 20:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:11:43.367 00:11:43.367 --- 10.0.0.2 ping statistics --- 00:11:43.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.367 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:11:43.367 00:11:43.367 --- 10.0.0.1 ping statistics --- 00:11:43.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.367 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1987395 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1987395 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1987395 ']' 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.367 20:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.367 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.625 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.883 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.884 20:05:47 
00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:43.884 20:05:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:43.884 EAL: No free 2048 kB hugepages reported on node 1
00:11:56.082 Initializing NVMe Controllers
00:11:56.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:56.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:56.082 Initialization complete. Launching workers.
00:11:56.082 ========================================================
00:11:56.082                                                                     Latency(us)
00:11:56.082 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:11:56.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   12438.00      48.59    5145.06     884.01   16044.16
00:11:56.083 ========================================================
00:11:56.083 Total                                                         :   12438.00      48.59    5145.06     884.01   16044.16
00:11:56.083
00:11:56.083 20:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:56.083 20:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:56.083 20:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:56.083 20:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:11:56.083 20:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:56.083 20:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:11:56.083 20:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:56.083 20:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:56.083 rmmod nvme_tcp
00:11:56.083 rmmod nvme_fabrics
00:11:56.083 rmmod nvme_keyring
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1987395 ']'
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1987395
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1987395 ']'
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1987395
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
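The spdk_nvme_perf run above is the I/O phase of the test: it connects to the listener created a moment earlier as an NVMe-oF TCP initiator, drives the Malloc0 namespace for ten seconds, and lands at 12438 IOPS with an average latency of roughly 5.1 ms; the teardown that follows (trap -, nvmftestfini) starts only once perf exits cleanly. The invocation, annotated (flag readings per spdk_nvme_perf usage; the -r transport string mirrors the listener exactly):

    # -q 64: 64 outstanding I/Os; -o 4096: 4 KiB I/O size; -w randrw -M 30:
    # random mixed workload with 30% reads; -t 10: run for 10 seconds.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'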
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1987395
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1987395'
00:11:56.083 killing process with pid 1987395
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1987395
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1987395
00:11:56.083 nvmf threads initialize successfully
00:11:56.083 bdev subsystem init successfully
00:11:56.083 created a nvmf target service
00:11:56.083 create targets's poll groups done
00:11:56.083 all subsystems of target started
00:11:56.083 nvmf target is running
00:11:56.083 all subsystems of target stopped
00:11:56.083 destroy targets's poll groups done
00:11:56.083 destroyed the nvmf target service
00:11:56.083 bdev subsystem finish successfully
00:11:56.083 nvmf threads destroy successfully
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:56.083 20:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:57.022
00:11:57.022 real 0m16.437s
00:11:57.022 user 0m43.521s
00:11:57.022 sys 0m4.014s
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:57.022 ************************************
00:11:57.022 END TEST nvmf_example
00:11:57.022 ************************************
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
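nvmftestfini unwinds the example in reverse: sync, unload the kernel initiator modules (the rmmod lines above), kill and reap the target, then drop the namespace and flush the peer interface. Note that the target's buffered lifecycle messages ("nvmf threads initialize successfully" through "nvmf threads destroy successfully") surface only here, once wait reaps the process and its captured stdout is flushed. A sketch of the cleanup's shape; the ip netns delete line is an assumption about what _remove_spdk_ns does, everything else is traced above:

    trap - SIGINT SIGTERM EXIT            # test passed: drop the error-cleanup trap
    sync
    modprobe -v -r nvme-tcp               # retried in a set +e loop (for i in {1..20}) upstream
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: terminate the target, then reap it
    ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1              # release the peer interface's addresses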
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:57.022 ************************************
00:11:57.022 START TEST nvmf_filesystem
00:11:57.022 ************************************
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:57.022 * Looking for test storage...
00:11:57.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:11:57.022 20:06:00
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:57.022 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:57.023 20:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:57.023 20:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:57.023 20:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:57.023 #define SPDK_CONFIG_H 00:11:57.023 #define SPDK_CONFIG_APPS 1 00:11:57.023 #define SPDK_CONFIG_ARCH native 00:11:57.023 #undef SPDK_CONFIG_ASAN 00:11:57.023 #undef SPDK_CONFIG_AVAHI 00:11:57.023 #undef SPDK_CONFIG_CET 00:11:57.023 #define SPDK_CONFIG_COVERAGE 1 00:11:57.023 #define SPDK_CONFIG_CROSS_PREFIX 00:11:57.023 #undef SPDK_CONFIG_CRYPTO 00:11:57.023 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:57.023 #undef SPDK_CONFIG_CUSTOMOCF 00:11:57.023 #undef SPDK_CONFIG_DAOS 00:11:57.023 #define SPDK_CONFIG_DAOS_DIR 00:11:57.023 #define SPDK_CONFIG_DEBUG 1 00:11:57.023 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:57.023 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:57.023 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:57.023 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:57.023 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:57.023 #undef SPDK_CONFIG_DPDK_UADK 00:11:57.023 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:57.023 #define SPDK_CONFIG_EXAMPLES 1 00:11:57.023 #undef SPDK_CONFIG_FC 00:11:57.023 #define SPDK_CONFIG_FC_PATH 00:11:57.023 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:57.023 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:57.023 #undef SPDK_CONFIG_FUSE 00:11:57.023 #undef SPDK_CONFIG_FUZZER 00:11:57.023 #define SPDK_CONFIG_FUZZER_LIB 00:11:57.023 #undef SPDK_CONFIG_GOLANG 00:11:57.023 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:57.023 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:57.023 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:57.023 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:57.023 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:57.023 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:57.023 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:57.023 #define SPDK_CONFIG_IDXD 1 00:11:57.023 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:57.023 #undef SPDK_CONFIG_IPSEC_MB 00:11:57.023 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:57.023 #define SPDK_CONFIG_ISAL 1 00:11:57.023 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:57.023 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:57.023 #define SPDK_CONFIG_LIBDIR 00:11:57.023 #undef SPDK_CONFIG_LTO 00:11:57.023 #define SPDK_CONFIG_MAX_LCORES 128 00:11:57.023 #define SPDK_CONFIG_NVME_CUSE 1 00:11:57.023 #undef SPDK_CONFIG_OCF 00:11:57.023 #define SPDK_CONFIG_OCF_PATH 00:11:57.023 #define SPDK_CONFIG_OPENSSL_PATH 00:11:57.023 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:57.023 #define SPDK_CONFIG_PGO_DIR 00:11:57.023 #undef SPDK_CONFIG_PGO_USE 00:11:57.023 #define SPDK_CONFIG_PREFIX /usr/local 00:11:57.023 #undef SPDK_CONFIG_RAID5F 00:11:57.023 #undef SPDK_CONFIG_RBD 00:11:57.023 #define SPDK_CONFIG_RDMA 1 00:11:57.023 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:57.023 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:57.023 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:57.023 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:57.023 #define SPDK_CONFIG_SHARED 1 00:11:57.023 #undef SPDK_CONFIG_SMA 00:11:57.023 #define SPDK_CONFIG_TESTS 1 00:11:57.023 #undef SPDK_CONFIG_TSAN 00:11:57.023 #define SPDK_CONFIG_UBLK 1 00:11:57.023 #define SPDK_CONFIG_UBSAN 1 00:11:57.023 #undef SPDK_CONFIG_UNIT_TESTS 00:11:57.023 #undef SPDK_CONFIG_URING 00:11:57.023 #define SPDK_CONFIG_URING_PATH 00:11:57.023 #undef SPDK_CONFIG_URING_ZNS 00:11:57.023 #undef SPDK_CONFIG_USDT 00:11:57.023 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:57.023 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:57.023 #define SPDK_CONFIG_VFIO_USER 1 00:11:57.023 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:11:57.023 #define SPDK_CONFIG_VHOST 1 00:11:57.023 #define SPDK_CONFIG_VIRTIO 1 00:11:57.023 #undef SPDK_CONFIG_VTUNE 00:11:57.023 #define SPDK_CONFIG_VTUNE_DIR 00:11:57.023 #define SPDK_CONFIG_WERROR 1 00:11:57.023 #define SPDK_CONFIG_WPDK_DIR 00:11:57.023 #undef SPDK_CONFIG_XNVME 00:11:57.023 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.023 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:57.024 20:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:57.024 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:57.025 20:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:57.025 20:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:57.025 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1989100 ]] 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1989100 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.ozttGK 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ozttGK/tests/target /tmp/spdk.ozttGK 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=949354496 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4335075328 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=38673842176 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=45083295744 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6409453568 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22531727360 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=8994226176 00:11:57.026 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=9016659968 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22433792 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22540906496 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=741376 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4508323840 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4508327936 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:57.027 * Looking for test storage... 
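[Note: the set_test_storage trace above indexes `df -T` output into bash associative arrays keyed by mount point; the candidate walk traced just below then picks the first directory whose filesystem has enough room and exports it as SPDK_TEST_STORAGE. A condensed standalone sketch of that pattern follows. The array and variable names match the trace, but the loop body is paraphrased rather than copied from common/autotest_common.sh, and it assumes df's size columns and requested_size are in the same units, as they are in this run.]

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        uses["$mount"]=$use
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)

    requested_size=2214592512    # value from the @360 line above: the 2 GiB request plus 64 MiB of slack
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( ${avails["$mount"]:-0} >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done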
00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=38673842176 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8624046080 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.027 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.028 20:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:00.351 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.351 
20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:00.352 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:00.352 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:00.352 Found net devices under 0000:84:00.0: cvl_0_0 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:00.352 Found net devices under 0000:84:00.1: cvl_0_1 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:12:00.352 00:12:00.352 --- 10.0.0.2 ping statistics --- 00:12:00.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.352 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:12:00.352 00:12:00.352 --- 10.0.0.1 ping statistics --- 00:12:00.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.352 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.352 ************************************ 00:12:00.352 START TEST nvmf_filesystem_no_in_capsule 00:12:00.352 ************************************ 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:00.352 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1990944 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1990944 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1990944 ']' 00:12:00.353 
20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.353 20:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.353 [2024-07-24 20:06:03.669134] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:12:00.353 [2024-07-24 20:06:03.669233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.353 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.353 [2024-07-24 20:06:03.761668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.353 [2024-07-24 20:06:03.921489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.353 [2024-07-24 20:06:03.921604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.353 [2024-07-24 20:06:03.921640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.353 [2024-07-24 20:06:03.921670] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.353 [2024-07-24 20:06:03.921696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
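[Note: nvmfappstart above launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket answers. A condensed sketch of that sequence, using the binary path, namespace, and flags visible in this run; the polling loop is an approximation, since the real helper also enforces a retry limit.]

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the target serves RPCs on the default UNIX socket.
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done

The four "Reactor started on core" notices that follow correspond to the -m 0xF core mask (cores 0 through 3).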
00:12:00.353 [2024-07-24 20:06:03.921871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.353 [2024-07-24 20:06:03.921940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.353 [2024-07-24 20:06:03.922002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.353 [2024-07-24 20:06:03.922006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.353 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.353 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 [2024-07-24 20:06:04.170023] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 Malloc1 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.612 20:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.612 [2024-07-24 20:06:04.390354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:00.612 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:00.870 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:00.870 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.870 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.870 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.870 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:00.870 { 00:12:00.870 "name": "Malloc1", 00:12:00.870 "aliases": [ 00:12:00.870 "89ff7139-110b-4fe4-b7dc-de2b18756510" 00:12:00.870 ], 00:12:00.870 "product_name": "Malloc disk", 00:12:00.870 "block_size": 512, 00:12:00.870 "num_blocks": 1048576, 00:12:00.870 "uuid": "89ff7139-110b-4fe4-b7dc-de2b18756510", 00:12:00.870 "assigned_rate_limits": { 00:12:00.870 "rw_ios_per_sec": 0, 00:12:00.870 "rw_mbytes_per_sec": 0, 00:12:00.870 "r_mbytes_per_sec": 0, 00:12:00.870 "w_mbytes_per_sec": 0 00:12:00.870 }, 00:12:00.870 "claimed": true, 00:12:00.870 "claim_type": "exclusive_write", 00:12:00.870 "zoned": false, 00:12:00.870 "supported_io_types": { 00:12:00.870 "read": 
true, 00:12:00.870 "write": true, 00:12:00.870 "unmap": true, 00:12:00.870 "flush": true, 00:12:00.870 "reset": true, 00:12:00.870 "nvme_admin": false, 00:12:00.870 "nvme_io": false, 00:12:00.870 "nvme_io_md": false, 00:12:00.870 "write_zeroes": true, 00:12:00.870 "zcopy": true, 00:12:00.870 "get_zone_info": false, 00:12:00.870 "zone_management": false, 00:12:00.870 "zone_append": false, 00:12:00.870 "compare": false, 00:12:00.870 "compare_and_write": false, 00:12:00.870 "abort": true, 00:12:00.870 "seek_hole": false, 00:12:00.870 "seek_data": false, 00:12:00.870 "copy": true, 00:12:00.870 "nvme_iov_md": false 00:12:00.870 }, 00:12:00.870 "memory_domains": [ 00:12:00.870 { 00:12:00.870 "dma_device_id": "system", 00:12:00.870 "dma_device_type": 1 00:12:00.870 }, 00:12:00.871 { 00:12:00.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.871 "dma_device_type": 2 00:12:00.871 } 00:12:00.871 ], 00:12:00.871 "driver_specific": {} 00:12:00.871 } 00:12:00.871 ]' 00:12:00.871 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:00.871 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:00.871 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:00.871 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:00.871 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:00.871 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:00.871 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:00.871 20:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.437 20:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.437 20:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:01.437 20:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.437 20:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:01.437 20:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:03.966 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:04.224 20:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.158 ************************************ 00:12:05.158 START TEST filesystem_ext4 00:12:05.158 ************************************ 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
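[Note: before the per-filesystem subtests that begin above, the harness connected the initiator to the new subsystem, located the block device by serial, and carved a single GPT partition over the 512 MiB namespace. The steps reduce to this sketch, with every value taken from the trace itself.]

    # Connect over TCP using the host identity generated by nvme gen-hostnqn.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02
    # Find the block device whose serial matches the subsystem's.
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # One GPT partition spanning the whole namespace, then reread the table.
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

Each filesystem subtest then runs make_filesystem on /dev/${nvme_name}p1, mounts it at /mnt/device, exercises a create/sync/delete cycle, and unmounts, as the ext4 and btrfs traces below show.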
00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:05.158 20:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:05.158 mke2fs 1.46.5 (30-Dec-2021) 00:12:05.417 Discarding device blocks: 0/522240 done 00:12:05.417 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:05.417 Filesystem UUID: 70ca31c1-e98a-4bfa-859e-e17060aaa36f 00:12:05.417 Superblock backups stored on blocks: 00:12:05.417 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:05.417 00:12:05.417 Allocating group tables: 0/64 done 00:12:05.417 Writing inode tables: 0/64 done 00:12:05.417 Creating journal (8192 blocks): done 00:12:05.417 Writing superblocks and filesystem accounting information: 0/64 done 00:12:05.417 00:12:05.417 20:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:05.417 20:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.351 20:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.351 
20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1990944 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.351 00:12:06.351 real 0m1.161s 00:12:06.351 user 0m0.014s 00:12:06.351 sys 0m0.061s 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:06.351 ************************************ 00:12:06.351 END TEST filesystem_ext4 00:12:06.351 ************************************ 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.351 ************************************ 00:12:06.351 START TEST filesystem_btrfs 00:12:06.351 ************************************ 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:06.351 20:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:06.351 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:06.919 btrfs-progs v6.6.2 00:12:06.919 See https://btrfs.readthedocs.io for more information. 00:12:06.919 00:12:06.919 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:06.919 NOTE: several default settings have changed in version 5.15, please make sure 00:12:06.919 this does not affect your deployments: 00:12:06.919 - DUP for metadata (-m dup) 00:12:06.919 - enabled no-holes (-O no-holes) 00:12:06.919 - enabled free-space-tree (-R free-space-tree) 00:12:06.919 00:12:06.919 Label: (null) 00:12:06.919 UUID: 82897fab-d767-4ab1-9cd4-b38fc7e5892b 00:12:06.919 Node size: 16384 00:12:06.919 Sector size: 4096 00:12:06.919 Filesystem size: 510.00MiB 00:12:06.919 Block group profiles: 00:12:06.919 Data: single 8.00MiB 00:12:06.919 Metadata: DUP 32.00MiB 00:12:06.919 System: DUP 8.00MiB 00:12:06.919 SSD detected: yes 00:12:06.919 Zoned device: no 00:12:06.919 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:06.919 Runtime features: free-space-tree 00:12:06.919 Checksum: crc32c 00:12:06.919 Number of devices: 1 00:12:06.919 Devices: 00:12:06.919 ID SIZE PATH 00:12:06.919 1 510.00MiB /dev/nvme0n1p1 00:12:06.919 00:12:06.919 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:06.919 20:06:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.485 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.485 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:07.485 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.485 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:07.485 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:07.485 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1990944 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.744 00:12:07.744 real 0m1.186s 00:12:07.744 user 0m0.027s 00:12:07.744 sys 0m0.118s 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.744 ************************************ 00:12:07.744 END TEST filesystem_btrfs 00:12:07.744 ************************************ 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.744 ************************************ 00:12:07.744 START TEST filesystem_xfs 00:12:07.744 ************************************ 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:07.744 20:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:07.744 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:07.744 = sectsz=512 attr=2, projid32bit=1 00:12:07.744 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:07.744 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:12:07.744 data = bsize=4096 blocks=130560, imaxpct=25 00:12:07.744 = sunit=0 swidth=0 blks 00:12:07.744 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:07.744 log =internal log bsize=4096 blocks=16384, version=2 00:12:07.744 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:07.744 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:09.118 Discarding blocks...Done. 00:12:09.118 20:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:09.118 20:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1990944 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.028 00:12:11.028 real 0m3.094s 00:12:11.028 user 0m0.019s 00:12:11.028 sys 0m0.065s 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:11.028 ************************************ 00:12:11.028 END TEST filesystem_xfs 00:12:11.028 ************************************ 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1990944 00:12:11.028 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1990944 ']' 00:12:11.029 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1990944 00:12:11.029 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:11.029 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.029 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1990944 00:12:11.029 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:11.029 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:11.029 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1990944' 00:12:11.029 killing process with pid 1990944 00:12:11.029 20:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1990944 00:12:11.029 20:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1990944 00:12:11.595 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:11.595 00:12:11.595 real 0m11.749s 00:12:11.595 user 0m44.710s 00:12:11.595 sys 0m1.798s 00:12:11.595 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.595 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.595 ************************************ 00:12:11.595 END TEST nvmf_filesystem_no_in_capsule 00:12:11.595 ************************************ 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.855 ************************************ 00:12:11.855 START TEST nvmf_filesystem_in_capsule 00:12:11.855 ************************************ 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1992992 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1992992 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1992992 ']' 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:11.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.855 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.855 [2024-07-24 20:06:15.496757] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:12:11.855 [2024-07-24 20:06:15.496857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.855 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.855 [2024-07-24 20:06:15.612006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.114 [2024-07-24 20:06:15.815887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.114 [2024-07-24 20:06:15.815996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.114 [2024-07-24 20:06:15.816033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.114 [2024-07-24 20:06:15.816063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.114 [2024-07-24 20:06:15.816090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.114 [2024-07-24 20:06:15.816247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.114 [2024-07-24 20:06:15.816282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.114 [2024-07-24 20:06:15.816341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.114 [2024-07-24 20:06:15.816345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.373 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.373 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:12.373 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.373 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.373 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.373 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.373 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:12.373 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:12.374 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.374 20:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
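This in-capsule variant differs from the earlier no_in_capsule run only in the transport options: -c 4096 sets the in-capsule data size, so writes of up to 4 KiB ride inside the NVMe/TCP command capsule instead of being pulled by the target in a separate data transfer. The rpc_cmd above wraps scripts/rpc.py against the target's RPC socket; as a standalone call the same step would be:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -u: IO unit size, -c: in-capsule data size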
00:12:12.374 [2024-07-24 20:06:16.006870] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.374 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.374 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:12.374 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.374 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.633 Malloc1 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.633 [2024-07-24 20:06:16.227577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:12.633 20:06:16 
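With the transport initialized, the target is wired up in the RPC steps visible above: a 512 MiB malloc bdev, a subsystem carrying the serial SPDKISFASTANDAWESOME, a namespace, and a TCP listener. The same sequence as standalone rpc.py calls:

  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MiB, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420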
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:12.633 { 00:12:12.633 "name": "Malloc1", 00:12:12.633 "aliases": [ 00:12:12.633 "50214d2a-b378-4d70-87e8-33f929dae579" 00:12:12.633 ], 00:12:12.633 "product_name": "Malloc disk", 00:12:12.633 "block_size": 512, 00:12:12.633 "num_blocks": 1048576, 00:12:12.633 "uuid": "50214d2a-b378-4d70-87e8-33f929dae579", 00:12:12.633 "assigned_rate_limits": { 00:12:12.633 "rw_ios_per_sec": 0, 00:12:12.633 "rw_mbytes_per_sec": 0, 00:12:12.633 "r_mbytes_per_sec": 0, 00:12:12.633 "w_mbytes_per_sec": 0 00:12:12.633 }, 00:12:12.633 "claimed": true, 00:12:12.633 "claim_type": "exclusive_write", 00:12:12.633 "zoned": false, 00:12:12.633 "supported_io_types": { 00:12:12.633 "read": true, 00:12:12.633 "write": true, 00:12:12.633 "unmap": true, 00:12:12.633 "flush": true, 00:12:12.633 "reset": true, 00:12:12.633 "nvme_admin": false, 00:12:12.633 "nvme_io": false, 00:12:12.633 "nvme_io_md": false, 00:12:12.633 "write_zeroes": true, 00:12:12.633 "zcopy": true, 00:12:12.633 "get_zone_info": false, 00:12:12.633 "zone_management": false, 00:12:12.633 "zone_append": false, 00:12:12.633 "compare": false, 00:12:12.633 "compare_and_write": false, 00:12:12.633 "abort": true, 00:12:12.633 "seek_hole": false, 00:12:12.633 "seek_data": false, 00:12:12.633 "copy": true, 00:12:12.633 "nvme_iov_md": false 00:12:12.633 }, 00:12:12.633 "memory_domains": [ 00:12:12.633 { 00:12:12.633 "dma_device_id": "system", 00:12:12.633 "dma_device_type": 1 00:12:12.633 }, 00:12:12.633 { 00:12:12.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.633 "dma_device_type": 2 00:12:12.633 } 00:12:12.633 ], 00:12:12.633 "driver_specific": {} 00:12:12.633 } 00:12:12.633 ]' 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:12.633 20:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:12.633 20:06:16 
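The size bookkeeping above is plain arithmetic on the bdev_get_bdevs JSON: 512-byte blocks times 1048576 blocks is 536870912 bytes, i.e. 512 MiB. get_bdev_size reports the value in MiB (the echo 512 above) and filesystem.sh rescales it to bytes for malloc_size, which the test later compares against the size the kernel reports for the attached namespace. A direct-to-bytes sketch of the same computation:

  bs=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
  nb=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
  echo $((bs * nb))                                                       # 536870912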
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.568 20:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.568 20:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:13.568 20:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.568 20:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:13.568 20:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:15.470 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:15.728 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:16.324 20:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.258 ************************************ 00:12:17.258 START TEST filesystem_in_capsule_ext4 00:12:17.258 ************************************ 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:17.258 20:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:17.258 mke2fs 1.46.5 (30-Dec-2021) 00:12:17.516 Discarding device blocks: 0/522240 done 00:12:17.516 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:17.516 Filesystem UUID: ccd8cd45-a29c-4268-8580-8f5a9e3cb14c 00:12:17.516 Superblock backups stored on blocks: 00:12:17.516 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:12:17.516 00:12:17.516 Allocating group tables: 0/64 done 00:12:17.516 Writing inode tables: 0/64 done 00:12:20.801 Creating journal (8192 blocks): done 00:12:20.801 Writing superblocks and filesystem accounting information: 0/64 done 00:12:20.801 00:12:20.801 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:20.801 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.367 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.367 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1992992 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.368 00:12:21.368 real 0m4.068s 00:12:21.368 user 0m0.021s 00:12:21.368 sys 0m0.058s 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.368 20:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:21.368 ************************************ 00:12:21.368 END TEST filesystem_in_capsule_ext4 00:12:21.368 ************************************ 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.368 20:06:25 
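Every filesystem pass ends with the same two health checks seen above: kill -0 confirms the nvmf target process survived the I/O, and lsblk confirms both the namespace and the test partition are still exposed. In sketch form (the pid is 1992992 in this run):

  kill -0 "$nvmfpid"                         # fails if the target died under load
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present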
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.368 ************************************ 00:12:21.368 START TEST filesystem_in_capsule_btrfs 00:12:21.368 ************************************ 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:21.368 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:21.626 btrfs-progs v6.6.2 00:12:21.626 See https://btrfs.readthedocs.io for more information. 00:12:21.626 00:12:21.626 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:21.626 NOTE: several default settings have changed in version 5.15, please make sure 00:12:21.626 this does not affect your deployments: 00:12:21.626 - DUP for metadata (-m dup) 00:12:21.626 - enabled no-holes (-O no-holes) 00:12:21.626 - enabled free-space-tree (-R free-space-tree) 00:12:21.626 00:12:21.626 Label: (null) 00:12:21.626 UUID: 7ad2ff9e-cc47-4e91-8cc2-c0506986b33f 00:12:21.626 Node size: 16384 00:12:21.626 Sector size: 4096 00:12:21.626 Filesystem size: 510.00MiB 00:12:21.626 Block group profiles: 00:12:21.626 Data: single 8.00MiB 00:12:21.626 Metadata: DUP 32.00MiB 00:12:21.626 System: DUP 8.00MiB 00:12:21.626 SSD detected: yes 00:12:21.626 Zoned device: no 00:12:21.626 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:21.626 Runtime features: free-space-tree 00:12:21.626 Checksum: crc32c 00:12:21.626 Number of devices: 1 00:12:21.626 Devices: 00:12:21.626 ID SIZE PATH 00:12:21.626 1 510.00MiB /dev/nvme0n1p1 00:12:21.626 00:12:21.626 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:21.626 20:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1992992 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.561 00:12:22.561 real 0m1.164s 00:12:22.561 user 0m0.024s 00:12:22.561 sys 0m0.112s 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.561 20:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:22.561 ************************************ 00:12:22.561 END TEST filesystem_in_capsule_btrfs 00:12:22.561 ************************************ 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.561 ************************************ 00:12:22.561 START TEST filesystem_in_capsule_xfs 00:12:22.561 ************************************ 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:22.561 20:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:22.820 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:22.820 = sectsz=512 attr=2, projid32bit=1 00:12:22.820 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:22.820 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:22.820 data = bsize=4096 blocks=130560, imaxpct=25 00:12:22.820 = sunit=0 swidth=0 blks 00:12:22.820 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:22.820 log =internal log bsize=4096 blocks=16384, version=2 00:12:22.820 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:22.820 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:12:23.754 Discarding blocks...Done. 00:12:23.754 20:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:23.754 20:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1992992 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.320 00:12:26.320 real 0m3.643s 00:12:26.320 user 0m0.012s 00:12:26.320 sys 0m0.067s 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:26.320 ************************************ 00:12:26.320 END TEST filesystem_in_capsule_xfs 00:12:26.320 ************************************ 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:26.320 20:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.579 20:06:30 
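waitforserial_disconnect, invoked here after the nvme disconnect, polls lsblk until no block device advertises the subsystem serial any more. A sketch of that loop as suggested by the xtrace below; the retry bound is a hypothetical stand-in, since the real limit lives in autotest_common.sh and is not shown in this trace:

  i=0
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      (( i++ >= 15 )) && break   # hypothetical bound
      sleep 1
  done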
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1992992 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1992992 ']' 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1992992 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1992992 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1992992' 00:12:26.579 killing process with pid 1992992 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1992992 00:12:26.579 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1992992 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:27.148 00:12:27.148 real 0m15.406s 00:12:27.148 user 0m58.797s 
00:12:27.148 sys 0m2.007s 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.148 ************************************ 00:12:27.148 END TEST nvmf_filesystem_in_capsule 00:12:27.148 ************************************ 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.148 rmmod nvme_tcp 00:12:27.148 rmmod nvme_fabrics 00:12:27.148 rmmod nvme_keyring 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.148 20:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.690 20:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.690 00:12:29.690 real 0m32.444s 00:12:29.690 user 1m44.528s 00:12:29.690 sys 0m6.092s 00:12:29.690 20:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.690 20:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.690 ************************************ 00:12:29.690 END TEST nvmf_filesystem 00:12:29.690 ************************************ 00:12:29.690 20:06:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:29.690 20:06:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.690 20:06:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.690 20:06:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 ************************************ 00:12:29.691 START TEST nvmf_target_discovery 00:12:29.691 ************************************ 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:29.691 * Looking for test storage... 00:12:29.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.691 20:06:33 
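
The host NQN produced by nvme gen-hostnqn above is what the initiator presents later in this test. As a sketch of that initiator-side flow, with the NQN, host ID, address and port copied from this log and a stock nvme-cli install assumed:

  # Generate a host NQN once, then present it when querying the discovery service
  nvme gen-hostnqn
  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --hostid=cd6acfbe-4794-e311-a299-001e67a97b02
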
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.691 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.692 20:06:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.228 20:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.228 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:32.228 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:32.229 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:32.229 Found net devices under 0000:84:00.0: cvl_0_0 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.229 20:06:35 
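
The Found entries above come from common.sh walking the PCI bus and globbing /sys/bus/pci/devices/$pci/net/* to map each matched function to its kernel netdev. A minimal sketch of the same lookup done by hand, with the bus addresses and device ID taken from this log:

  lspci -d 8086:159b                          # the two E810 (0x159b) ports matched above
  ls /sys/bus/pci/devices/0000:84:00.0/net    # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:84:00.1/net    # -> cvl_0_1
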
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:32.229 Found net devices under 0000:84:00.1: cvl_0_1 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.229 20:06:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.229 20:06:35 
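
The sequence above splits the two ports of one host between a fresh network namespace (target side) and the root namespace (initiator side). A condensed sketch of that plumbing, with every name and address exactly as in the entries above:

  ip netns add cvl_0_0_ns_spdk                        # namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
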
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.229 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:32.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:12:32.229 00:12:32.229 --- 10.0.0.2 ping statistics --- 00:12:32.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.229 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:12:32.229 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:12:32.488 00:12:32.488 --- 10.0.0.1 ping statistics --- 00:12:32.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.488 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1996879 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1996879 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1996879 ']' 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.488 20:06:36 
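
Only after the firewall is opened and reachability is proven in both directions does the harness launch the target inside the namespace. The same steps condensed from the entries above, with paths and masks verbatim from this log:

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # root namespace -> target side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target side -> root namespace
  modprobe nvme-tcp                                   # initiator kernel driver
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # core mask 0xF, trace mask 0xFFFF
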
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.488 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.488 [2024-07-24 20:06:36.107039] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:12:32.488 [2024-07-24 20:06:36.107135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.488 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.488 [2024-07-24 20:06:36.213097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.745 [2024-07-24 20:06:36.416577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.745 [2024-07-24 20:06:36.416650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.745 [2024-07-24 20:06:36.416670] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.745 [2024-07-24 20:06:36.416688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.745 [2024-07-24 20:06:36.416703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.745 [2024-07-24 20:06:36.416883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.745 [2024-07-24 20:06:36.416948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.745 [2024-07-24 20:06:36.417007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.745 [2024-07-24 20:06:36.417010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.002 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.002 [2024-07-24 20:06:36.612762] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 Null1 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 [2024-07-24 20:06:36.658132] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 Null2 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 Null3 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 Null4 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.003 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:12:33.261 00:12:33.261 Discovery Log Number of Records 6, Generation counter 6 00:12:33.261 =====Discovery Log Entry 0====== 00:12:33.261 trtype: tcp 00:12:33.261 adrfam: ipv4 00:12:33.261 subtype: current discovery subsystem 00:12:33.261 treq: not required 00:12:33.261 portid: 0 00:12:33.261 trsvcid: 4420 00:12:33.261 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.261 traddr: 10.0.0.2 00:12:33.261 eflags: explicit discovery connections, duplicate discovery information 00:12:33.261 sectype: none 00:12:33.261 =====Discovery Log Entry 1====== 00:12:33.261 trtype: tcp 00:12:33.261 adrfam: ipv4 00:12:33.261 subtype: nvme subsystem 00:12:33.261 treq: not required 00:12:33.261 portid: 0 00:12:33.261 trsvcid: 4420 00:12:33.261 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:33.261 traddr: 10.0.0.2 00:12:33.261 eflags: none 00:12:33.261 sectype: none 00:12:33.261 =====Discovery Log Entry 2====== 00:12:33.261 trtype: tcp 00:12:33.261 adrfam: ipv4 00:12:33.261 subtype: nvme subsystem 00:12:33.261 treq: not required 00:12:33.261 portid: 0 00:12:33.261 trsvcid: 4420 00:12:33.261 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:33.261 traddr: 10.0.0.2 00:12:33.261 eflags: none 00:12:33.261 sectype: none 00:12:33.262 =====Discovery Log Entry 3====== 00:12:33.262 trtype: tcp 00:12:33.262 adrfam: ipv4 00:12:33.262 subtype: nvme subsystem 00:12:33.262 treq: not required 00:12:33.262 portid: 0 00:12:33.262 trsvcid: 4420 00:12:33.262 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:33.262 traddr: 10.0.0.2 00:12:33.262 eflags: none 00:12:33.262 sectype: none 00:12:33.262 =====Discovery Log Entry 4====== 00:12:33.262 trtype: tcp 00:12:33.262 adrfam: ipv4 00:12:33.262 subtype: nvme subsystem 00:12:33.262 treq: not required 00:12:33.262 portid: 0 00:12:33.262 trsvcid: 4420 00:12:33.262 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:33.262 traddr: 10.0.0.2 00:12:33.262 eflags: none 00:12:33.262 sectype: none 00:12:33.262 =====Discovery Log Entry 5====== 00:12:33.262 trtype: tcp 00:12:33.262 adrfam: ipv4 00:12:33.262 subtype: discovery subsystem referral 00:12:33.262 treq: not required 00:12:33.262 portid: 0 00:12:33.262 trsvcid: 4430 00:12:33.262 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.262 traddr: 10.0.0.2 00:12:33.262 eflags: none 00:12:33.262 sectype: none 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:33.262 Perform nvmf subsystem discovery via RPC 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 [ 00:12:33.262 { 00:12:33.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:33.262 "subtype": "Discovery", 00:12:33.262 "listen_addresses": [ 00:12:33.262 { 00:12:33.262 "trtype": "TCP", 00:12:33.262 "adrfam": "IPv4", 00:12:33.262 "traddr": "10.0.0.2", 00:12:33.262 "trsvcid": "4420" 00:12:33.262 } 00:12:33.262 ], 00:12:33.262 "allow_any_host": true, 00:12:33.262 "hosts": [] 00:12:33.262 }, 00:12:33.262 { 00:12:33.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.262 "subtype": "NVMe", 00:12:33.262 "listen_addresses": [ 00:12:33.262 { 00:12:33.262 "trtype": "TCP", 00:12:33.262 "adrfam": "IPv4", 00:12:33.262 
"traddr": "10.0.0.2", 00:12:33.262 "trsvcid": "4420" 00:12:33.262 } 00:12:33.262 ], 00:12:33.262 "allow_any_host": true, 00:12:33.262 "hosts": [], 00:12:33.262 "serial_number": "SPDK00000000000001", 00:12:33.262 "model_number": "SPDK bdev Controller", 00:12:33.262 "max_namespaces": 32, 00:12:33.262 "min_cntlid": 1, 00:12:33.262 "max_cntlid": 65519, 00:12:33.262 "namespaces": [ 00:12:33.262 { 00:12:33.262 "nsid": 1, 00:12:33.262 "bdev_name": "Null1", 00:12:33.262 "name": "Null1", 00:12:33.262 "nguid": "0AE940247D764EA39F955247927197AB", 00:12:33.262 "uuid": "0ae94024-7d76-4ea3-9f95-5247927197ab" 00:12:33.262 } 00:12:33.262 ] 00:12:33.262 }, 00:12:33.262 { 00:12:33.262 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:33.262 "subtype": "NVMe", 00:12:33.262 "listen_addresses": [ 00:12:33.262 { 00:12:33.262 "trtype": "TCP", 00:12:33.262 "adrfam": "IPv4", 00:12:33.262 "traddr": "10.0.0.2", 00:12:33.262 "trsvcid": "4420" 00:12:33.262 } 00:12:33.262 ], 00:12:33.262 "allow_any_host": true, 00:12:33.262 "hosts": [], 00:12:33.262 "serial_number": "SPDK00000000000002", 00:12:33.262 "model_number": "SPDK bdev Controller", 00:12:33.262 "max_namespaces": 32, 00:12:33.262 "min_cntlid": 1, 00:12:33.262 "max_cntlid": 65519, 00:12:33.262 "namespaces": [ 00:12:33.262 { 00:12:33.262 "nsid": 1, 00:12:33.262 "bdev_name": "Null2", 00:12:33.262 "name": "Null2", 00:12:33.262 "nguid": "A5625DB688464632A83AF3E7F1115872", 00:12:33.262 "uuid": "a5625db6-8846-4632-a83a-f3e7f1115872" 00:12:33.262 } 00:12:33.262 ] 00:12:33.262 }, 00:12:33.262 { 00:12:33.262 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:33.262 "subtype": "NVMe", 00:12:33.262 "listen_addresses": [ 00:12:33.262 { 00:12:33.262 "trtype": "TCP", 00:12:33.262 "adrfam": "IPv4", 00:12:33.262 "traddr": "10.0.0.2", 00:12:33.262 "trsvcid": "4420" 00:12:33.262 } 00:12:33.262 ], 00:12:33.262 "allow_any_host": true, 00:12:33.262 "hosts": [], 00:12:33.262 "serial_number": "SPDK00000000000003", 00:12:33.262 "model_number": "SPDK bdev Controller", 00:12:33.262 "max_namespaces": 32, 00:12:33.262 "min_cntlid": 1, 00:12:33.262 "max_cntlid": 65519, 00:12:33.262 "namespaces": [ 00:12:33.262 { 00:12:33.262 "nsid": 1, 00:12:33.262 "bdev_name": "Null3", 00:12:33.262 "name": "Null3", 00:12:33.262 "nguid": "E8D137E5950A4B8CB4B53D9A73D754DC", 00:12:33.262 "uuid": "e8d137e5-950a-4b8c-b4b5-3d9a73d754dc" 00:12:33.262 } 00:12:33.262 ] 00:12:33.262 }, 00:12:33.262 { 00:12:33.262 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:33.262 "subtype": "NVMe", 00:12:33.262 "listen_addresses": [ 00:12:33.262 { 00:12:33.262 "trtype": "TCP", 00:12:33.262 "adrfam": "IPv4", 00:12:33.262 "traddr": "10.0.0.2", 00:12:33.262 "trsvcid": "4420" 00:12:33.262 } 00:12:33.262 ], 00:12:33.262 "allow_any_host": true, 00:12:33.262 "hosts": [], 00:12:33.262 "serial_number": "SPDK00000000000004", 00:12:33.262 "model_number": "SPDK bdev Controller", 00:12:33.262 "max_namespaces": 32, 00:12:33.262 "min_cntlid": 1, 00:12:33.262 "max_cntlid": 65519, 00:12:33.262 "namespaces": [ 00:12:33.262 { 00:12:33.262 "nsid": 1, 00:12:33.262 "bdev_name": "Null4", 00:12:33.262 "name": "Null4", 00:12:33.262 "nguid": "F405B62A6E6A4B50ADAF5A9FC34141B4", 00:12:33.262 "uuid": "f405b62a-6e6a-4b50-adaf-5a9fc34141b4" 00:12:33.262 } 00:12:33.262 ] 00:12:33.262 } 00:12:33.262 ] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:33.262 20:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.262 20:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.262 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.263 20:06:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.263 rmmod nvme_tcp 00:12:33.263 rmmod nvme_fabrics 00:12:33.263 rmmod nvme_keyring 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.521 20:06:37 
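
Teardown mirrors setup: each subsystem is deleted before its backing bdev, the port 4430 referral is removed, and nvmftestfini unloads the initiator modules. In the same rpc.py terms as above, a sketch of one iteration with names from this log:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # subsystem first
  scripts/rpc.py bdev_null_delete Null1                             # then its bdev
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  modprobe -r nvme-tcp nvme-fabrics                                 # initiator-side cleanup
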
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1996879 ']' 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1996879 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1996879 ']' 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1996879 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1996879 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1996879' 00:12:33.521 killing process with pid 1996879 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1996879 00:12:33.521 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1996879 00:12:33.779 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.779 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.779 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.779 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.779 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.779 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.779 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.779 20:06:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.350 00:12:36.350 real 0m6.505s 00:12:36.350 user 0m5.094s 00:12:36.350 sys 0m2.621s 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.350 ************************************ 00:12:36.350 END TEST nvmf_target_discovery 00:12:36.350 ************************************ 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.350 ************************************ 00:12:36.350 START TEST nvmf_referrals 00:12:36.350 ************************************ 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:36.350 * Looking for test storage... 00:12:36.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.350 20:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.350 20:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.350 20:06:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.885 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:38.886 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.886 20:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:38.886 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:38.886 Found net devices under 0000:84:00.0: cvl_0_0 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 
00:12:38.886 Found net devices under 0000:84:00.1: cvl_0_1 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:38.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:12:38.886 00:12:38.886 --- 10.0.0.2 ping statistics --- 00:12:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.886 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:12:38.886 00:12:38.886 --- 10.0.0.1 ping statistics --- 00:12:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.886 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1999104 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1999104 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1999104 ']' 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
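The nvmf_tcp_init sequence traced above builds the test's loopback topology out of the two E810 ports found earlier: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 on the initiator side, and a ping in each direction proves the path before nvmf_tgt starts. A minimal standalone sketch of that pattern, using the device and namespace names from this log (any cabled port pair would work the same way):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                                # namespace that will own the target port
  ip link set cvl_0_0 netns "$NS"                   # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target ns -> root ns

From here on, every target-side command, including the nvmf_tgt launch and its RPCs, is prefixed with ip netns exec "$NS", which is exactly what the NVMF_TARGET_NS_CMD wrapper visible in the trace does.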
00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.886 20:06:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.886 [2024-07-24 20:06:42.616575] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:12:38.886 [2024-07-24 20:06:42.616670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.886 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.145 [2024-07-24 20:06:42.728621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.403 [2024-07-24 20:06:42.932163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.403 [2024-07-24 20:06:42.932245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.403 [2024-07-24 20:06:42.932264] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.403 [2024-07-24 20:06:42.932281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.403 [2024-07-24 20:06:42.932295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.403 [2024-07-24 20:06:42.932365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.403 [2024-07-24 20:06:42.932426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.403 [2024-07-24 20:06:42.932487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.403 [2024-07-24 20:06:42.932491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.403 [2024-07-24 20:06:43.120781] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:39.403 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.404 20:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.404 [2024-07-24 20:06:43.133995] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.404 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.662 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.920 20:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:39.920 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
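The get_referral_ips helper invoked here makes each assertion two-sided, as the trace that follows shows: with the rpc argument it asks the target itself for its referral table, with nvme it performs a real discovery against 10.0.0.2:8009 and extracts the referral records from the returned log page, and the test string-compares the two sorted address lists. A condensed sketch of that check, assuming rpc.py (the script the rpc_cmd wrapper runs here) is on PATH and reusing the hostnqn/hostid generated earlier in this log:

  rpc_ips=$(rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
  nvme_ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
               -t tcp -a 10.0.0.2 -s 8009 -o json |
             jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
  [[ "$rpc_ips" == "$nvme_ips" ]]   # the target's view and the wire view must agree

The select() filter matters: the discovery controller reports itself as a "current discovery subsystem" record, so that entry is dropped and only the referral entries remain for comparison.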
00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.178 20:06:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.436 20:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.436 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.694 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.952 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
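The get_discovery_entries checks above pin down how the two referral flavors are advertised on the wire: a referral registered with -n nqn.2016-06.io.spdk:cnode1 surfaces as an "nvme subsystem" record carrying that subsystem NQN, while one registered with -n discovery (or what remains after the subsystem referral is removed) shows up as a "discovery subsystem referral" record carrying the well-known nqn.2014-08.org.nvmexpress.discovery. A small sketch of that split, under the same assumptions as the comparison above (discover_json is a hypothetical helper name, not part of the test scripts):

  discover_json() {
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
         -t tcp -a 10.0.0.2 -s 8009 -o json
  }
  # subsystem-type referrals expose the referred-to subsystem NQN
  discover_json | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
  # discovery-type referrals expose the well-known discovery NQN
  discover_json | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'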
00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.210 rmmod nvme_tcp 00:12:41.210 rmmod nvme_fabrics 00:12:41.210 rmmod nvme_keyring 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1999104 ']' 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1999104 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1999104 ']' 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1999104 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1999104 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1999104' 00:12:41.210 killing process with pid 1999104 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1999104 00:12:41.210 20:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1999104 00:12:41.778 20:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.778 20:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.778 20:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.778 20:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.778 20:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.778 20:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.778 20:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.778 20:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.682 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:43.682 00:12:43.682 real 0m7.787s 00:12:43.682 user 0m11.231s 00:12:43.682 sys 0m2.888s 00:12:43.682 20:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.682 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.682 ************************************ 00:12:43.682 END TEST nvmf_referrals 00:12:43.682 ************************************ 00:12:43.682 20:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:43.682 20:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:43.682 20:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.682 20:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.940 ************************************ 00:12:43.940 START TEST nvmf_connect_disconnect 00:12:43.940 ************************************ 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:43.940 * Looking for test storage... 00:12:43.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.940 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.941 20:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.941 20:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:46.472 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:46.472 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.472 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.473 20:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:46.473 Found net devices under 0000:84:00.0: cvl_0_0 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:46.473 Found net devices under 0000:84:00.1: cvl_0_1 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:46.473 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:46.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:46.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms
00:12:46.732
00:12:46.732 --- 10.0.0.2 ping statistics ---
00:12:46.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:46.732 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:46.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:46.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:12:46.732 00:12:46.732 --- 10.0.0.1 ping statistics --- 00:12:46.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.732 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2001534 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2001534 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2001534 ']' 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.732 20:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.991 [2024-07-24 20:06:50.531707] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
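The namespace bring-up traced above follows a fixed recipe: one port of the dual-port NIC stays in the root namespace as the initiator, the other is moved into a fresh namespace for the target, and a ping in each direction proves the 10.0.0.0/24 link before any NVMe/TCP traffic is attempted. A minimal standalone sketch of the same topology, using the cvl_0_0/cvl_0_1 interface names from this particular run (other machines will see different names):

  ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns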
00:12:46.991 [2024-07-24 20:06:50.531886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.991 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.991 [2024-07-24 20:06:50.689950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.249 [2024-07-24 20:06:50.900724] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.249 [2024-07-24 20:06:50.900842] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.249 [2024-07-24 20:06:50.900880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.249 [2024-07-24 20:06:50.900915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.249 [2024-07-24 20:06:50.900941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.249 [2024-07-24 20:06:50.901102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.249 [2024-07-24 20:06:50.901178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.249 [2024-07-24 20:06:50.901181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.249 [2024-07-24 20:06:50.901137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.508 [2024-07-24 20:06:51.100864] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.508 20:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.508 [2024-07-24 20:06:51.167782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:47.508 20:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:50.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.687 20:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.687 rmmod nvme_tcp 00:13:01.687 rmmod nvme_fabrics 00:13:01.687 rmmod nvme_keyring 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2001534 ']' 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2001534 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2001534 ']' 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2001534 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2001534 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2001534' 00:13:01.687 killing process with pid 2001534 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2001534 00:13:01.687 20:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2001534 00:13:01.687 20:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:01.687 20:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:01.687 20:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:01.687 20:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.687 20:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:01.687 20:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.687 20:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.687 20:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.219 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.219 00:13:04.219 real 0m19.974s 00:13:04.219 user 0m57.585s 00:13:04.219 sys 0m3.920s 00:13:04.219 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.219 20:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.219 ************************************ 00:13:04.219 END TEST nvmf_connect_disconnect 00:13:04.219 ************************************ 00:13:04.219 20:07:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:04.219 20:07:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:04.219 20:07:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.219 20:07:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.219 ************************************ 00:13:04.219 START TEST nvmf_multitarget 00:13:04.219 ************************************ 00:13:04.219 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:04.219 * Looking for test storage... 00:13:04.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.219 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.220 20:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.220 20:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:06.754 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.754 20:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:06.754 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:06.754 Found net devices under 0000:84:00.0: cvl_0_0 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:06.754 Found net devices under 0000:84:00.1: cvl_0_1 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.754 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:13:06.755 00:13:06.755 --- 10.0.0.2 ping statistics --- 00:13:06.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.755 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:13:06.755 00:13:06.755 --- 10.0.0.1 ping statistics --- 00:13:06.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.755 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2005312 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2005312 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2005312 ']' 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
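As in the connect_disconnect run earlier, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten polls until the RPC socket answers (max_retries=100 in the trace) before the test proceeds. A simplified stand-in for that helper, assuming the repo-relative binary path and the default /var/tmp/spdk.sock socket; the rpc_get_methods probe and 0.1 s sleep are illustrative, the real loop lives in autotest_common.sh:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  for i in $(seq 1 100); do                             # mirrors max_retries=100 above
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done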
00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.755 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:06.755 [2024-07-24 20:07:10.489375] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:13:06.755 [2024-07-24 20:07:10.489489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.755 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.013 [2024-07-24 20:07:10.601305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.276 [2024-07-24 20:07:10.799392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.276 [2024-07-24 20:07:10.799460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.276 [2024-07-24 20:07:10.799480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.276 [2024-07-24 20:07:10.799495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.276 [2024-07-24 20:07:10.799508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.276 [2024-07-24 20:07:10.799584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.276 [2024-07-24 20:07:10.799656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.276 [2024-07-24 20:07:10.799716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.276 [2024-07-24 20:07:10.799720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:07.276 20:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:07.533 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:07.533 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:07.533 "nvmf_tgt_1" 00:13:07.533 20:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:13:07.791 "nvmf_tgt_2"
00:13:07.791 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:13:07.792 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:13:07.792 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:13:07.792 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:13:08.050 true
00:13:08.050 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:13:08.050 true
00:13:08.050 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:13:08.050 20:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:13:08.308 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
00:13:08.308 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:13:08.308 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:13:08.308 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:08.308 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync
00:13:08.308 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:08.308 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e
00:13:08.309 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:08.309 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:08.309 rmmod nvme_tcp
00:13:08.309 rmmod nvme_fabrics
00:13:08.567 rmmod nvme_keyring
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2005312 ']'
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2005312
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2005312 ']'
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2005312
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
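The multitarget check traced above is a pure RPC round-trip: the default target already counts as one entry, two extra targets are created, the count is re-read through jq, and both are deleted again (each delete prints "true"). Condensed into a sketch, with the absolute repo path from this run bound to $rpc for brevity:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # -n/-s flags as used in the trace
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
  $rpc nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default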
00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2005312 00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.567 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.568 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2005312' 00:13:08.568 killing process with pid 2005312 00:13:08.568 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2005312 00:13:08.568 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2005312 00:13:08.828 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.828 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.828 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.828 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.828 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.828 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.828 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.828 20:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.382 00:13:11.382 real 0m7.105s 00:13:11.382 user 0m8.673s 00:13:11.382 sys 0m2.654s 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.382 ************************************ 00:13:11.382 END TEST nvmf_multitarget 00:13:11.382 ************************************ 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.382 ************************************ 00:13:11.382 START TEST nvmf_rpc 00:13:11.382 ************************************ 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:11.382 * Looking for test storage... 
00:13:11.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.382 20:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:11.382 20:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.916 20:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.916 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:13.917 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:13.917 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.917 
20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:13.917 Found net devices under 0000:84:00.0: cvl_0_0 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:13.917 Found net devices under 0000:84:00.1: cvl_0_1 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.917 20:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:13.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:13.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms
00:13:13.917
00:13:13.917 --- 10.0.0.2 ping statistics ---
00:13:13.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:13.917 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:13.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:13.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:13:13.917
00:13:13.917 --- 10.0.0.1 ping statistics ---
00:13:13.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:13.917 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:13.917 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2007552
00:13:13.918 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:13.918 20:07:17
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2007552 00:13:13.918 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2007552 ']' 00:13:13.918 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.918 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.918 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.918 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.918 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.918 [2024-07-24 20:07:17.525756] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:13:13.918 [2024-07-24 20:07:17.525857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.918 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.918 [2024-07-24 20:07:17.636720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.177 [2024-07-24 20:07:17.840129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.177 [2024-07-24 20:07:17.840234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.177 [2024-07-24 20:07:17.840268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.177 [2024-07-24 20:07:17.840310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.177 [2024-07-24 20:07:17.840327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
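Behind these notices is the nvmfappstart/waitforlisten pattern: launch nvmf_tgt inside the test namespace with the flags recorded above, then poll the RPC socket until the app answers. A condensed sketch; the rpc_get_methods probe via scripts/rpc.py is a stand-in for what the harness's waitforlisten does internally:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &          # -m 0xF: one reactor on each of cores 0-3
    nvmfpid=$!
    # Block until the target listens on the UNIX domain socket /var/tmp/spdk.sock.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done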
00:13:14.177 [2024-07-24 20:07:17.840467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:14.177 [2024-07-24 20:07:17.840509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:13:14.177 [2024-07-24 20:07:17.840572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:13:14.177 [2024-07-24 20:07:17.840576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:14.436 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:14.436 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0
00:13:14.436 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:14.436 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:14.436 20:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:13:14.436 "tick_rate": 2700000000,
00:13:14.436 "poll_groups": [
00:13:14.436 {
00:13:14.436 "name": "nvmf_tgt_poll_group_000",
00:13:14.436 "admin_qpairs": 0,
00:13:14.436 "io_qpairs": 0,
00:13:14.436 "current_admin_qpairs": 0,
00:13:14.436 "current_io_qpairs": 0,
00:13:14.436 "pending_bdev_io": 0,
00:13:14.436 "completed_nvme_io": 0,
00:13:14.436 "transports": []
00:13:14.436 },
00:13:14.436 {
00:13:14.436 "name": "nvmf_tgt_poll_group_001",
00:13:14.436 "admin_qpairs": 0,
00:13:14.436 "io_qpairs": 0,
00:13:14.436 "current_admin_qpairs": 0,
00:13:14.436 "current_io_qpairs": 0,
00:13:14.436 "pending_bdev_io": 0,
00:13:14.436 "completed_nvme_io": 0,
00:13:14.436 "transports": []
00:13:14.436 },
00:13:14.436 {
00:13:14.436 "name": "nvmf_tgt_poll_group_002",
00:13:14.436 "admin_qpairs": 0,
00:13:14.436 "io_qpairs": 0,
00:13:14.436 "current_admin_qpairs": 0,
00:13:14.436 "current_io_qpairs": 0,
00:13:14.436 "pending_bdev_io": 0,
00:13:14.436 "completed_nvme_io": 0,
00:13:14.436 "transports": []
00:13:14.436 },
00:13:14.436 {
00:13:14.436 "name": "nvmf_tgt_poll_group_003",
00:13:14.436 "admin_qpairs": 0,
00:13:14.436 "io_qpairs": 0,
00:13:14.436 "current_admin_qpairs": 0,
00:13:14.436 "current_io_qpairs": 0,
00:13:14.436 "pending_bdev_io": 0,
00:13:14.436 "completed_nvme_io": 0,
00:13:14.436 "transports": []
00:13:14.436 }
00:13:14.436 ]
00:13:14.436 }'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
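The jcount and jsum helpers seen here are thin jq wrappers from rpc.sh, so the poll-group check boils down to the following. A sketch, with scripts/rpc.py standing in for the harness's rpc_cmd:

    stats=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats)
    # jcount: with -m 0xF there should be one poll group per reactor.
    echo "$stats" | jq '.poll_groups[].name' | wc -l                              # expect 4
    # jsum, used by the qpair checks that follow: sum one numeric field across groups.
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'  # expect 0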
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:14.436 [2024-07-24 20:07:18.163306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:13:14.436 "tick_rate": 2700000000,
00:13:14.436 "poll_groups": [
00:13:14.436 {
00:13:14.436 "name": "nvmf_tgt_poll_group_000",
00:13:14.436 "admin_qpairs": 0,
00:13:14.436 "io_qpairs": 0,
00:13:14.436 "current_admin_qpairs": 0,
00:13:14.436 "current_io_qpairs": 0,
00:13:14.436 "pending_bdev_io": 0,
00:13:14.436 "completed_nvme_io": 0,
00:13:14.436 "transports": [
00:13:14.436 {
00:13:14.436 "trtype": "TCP"
00:13:14.436 }
00:13:14.436 ]
00:13:14.436 },
00:13:14.436 {
00:13:14.436 "name": "nvmf_tgt_poll_group_001",
00:13:14.436 "admin_qpairs": 0,
00:13:14.436 "io_qpairs": 0,
00:13:14.436 "current_admin_qpairs": 0,
00:13:14.436 "current_io_qpairs": 0,
00:13:14.436 "pending_bdev_io": 0,
00:13:14.436 "completed_nvme_io": 0,
00:13:14.436 "transports": [
00:13:14.436 {
00:13:14.436 "trtype": "TCP"
00:13:14.436 }
00:13:14.436 ]
00:13:14.436 },
00:13:14.436 {
00:13:14.436 "name": "nvmf_tgt_poll_group_002",
00:13:14.436 "admin_qpairs": 0,
00:13:14.436 "io_qpairs": 0,
00:13:14.436 "current_admin_qpairs": 0,
00:13:14.436 "current_io_qpairs": 0,
00:13:14.436 "pending_bdev_io": 0,
00:13:14.436 "completed_nvme_io": 0,
00:13:14.436 "transports": [
00:13:14.436 {
00:13:14.436 "trtype": "TCP"
00:13:14.436 }
00:13:14.436 ]
00:13:14.436 },
00:13:14.436 {
00:13:14.436 "name": "nvmf_tgt_poll_group_003",
00:13:14.436 "admin_qpairs": 0,
00:13:14.436 "io_qpairs": 0,
00:13:14.436 "current_admin_qpairs": 0,
00:13:14.436 "current_io_qpairs": 0,
00:13:14.436 "pending_bdev_io": 0,
00:13:14.436 "completed_nvme_io": 0,
00:13:14.436 "transports": [
00:13:14.436 {
00:13:14.436 "trtype": "TCP"
00:13:14.436 }
00:13:14.436 ]
00:13:14.436 }
00:13:14.436 ]
00:13:14.436 }'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:14.436 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:13:14.695 20:07:18
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.695 Malloc1 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.695 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.696 [2024-07-24 20:07:18.334345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:14.696 [2024-07-24 20:07:18.356857] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:14.696 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:14.696 could not add new controller: failed to write to nvme-fabrics device 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.696 20:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.262 20:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.262 20:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.262 20:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.262 20:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:15.262 20:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.838 [2024-07-24 20:07:21.187299] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:17.838 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:17.838 could not add new controller: failed to write to nvme-fabrics device 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.838 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.097 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.097 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.097 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.097 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:18.097 20:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
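The two rejected connects in this stretch are the negative half of the host-authorization checks. Stripped of the NOT/waitforserial plumbing, the cycle is the following sketch (scripts/rpc.py stands in for rpc_cmd, and the --hostid argument passed throughout the run is omitted for brevity):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn=$HOSTNQN -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420  # rejected: host not on the allow list
    scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN
    nvme connect --hostnqn=$HOSTNQN -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420  # accepted
    nvme disconnect -n $SUBNQN
    scripts/rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN
    nvme connect --hostnqn=$HOSTNQN -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420  # rejected again
    scripts/rpc.py nvmf_subsystem_allow_any_host -e $SUBNQN
    nvme connect --hostnqn=$HOSTNQN -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420  # accepted for any host

The seq 1 5 loop that follows repeats the open path once per iteration: create cnode1, add the TCP listener and the Malloc1 namespace, allow any host, connect, wait for the SPDKISFASTANDAWESOME serial, disconnect, then remove the namespace and delete the subsystem.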
00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 [2024-07-24 20:07:23.954090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.626 
20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.627 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.627 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.627 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.627 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.627 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.627 20:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.885 20:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.885 20:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.885 20:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.885 20:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:20.885 20:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.415 [2024-07-24 20:07:26.749578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.415 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:23.416 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.416 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.416 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.416 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.416 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.416 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.416 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.416 20:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.674 20:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.674 20:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:23.674 20:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.674 20:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:23.674 20:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.204 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.205 [2024-07-24 20:07:29.503479] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.205 20:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.463 20:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.463 20:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.463 20:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.463 20:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:26.463 20:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:28.362 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:28.362 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:28.362 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.362 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:28.362 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.362 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:28.362 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.621 20:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.621 [2024-07-24 20:07:32.258114] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.621 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.187 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.187 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.187 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.187 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:29.187 20:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:31.714 20:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:31.714 20:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:31.714 20:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.714 20:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:31.714 20:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.714 20:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:31.714 20:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.714 [2024-07-24 20:07:35.103221] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.714 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.972 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.972 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:31.972 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.972 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:31.972 20:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:34.501 20:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 [2024-07-24 20:07:37.892911] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 [2024-07-24 20:07:37.940959] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.501 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.502 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 [2024-07-24 20:07:37.989120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.502 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.502 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 [2024-07-24 20:07:38.037287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 [2024-07-24 20:07:38.085462] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.502 20:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:13:34.502 "tick_rate": 2700000000,
00:13:34.502 "poll_groups": [
00:13:34.502 {
00:13:34.502 "name": "nvmf_tgt_poll_group_000",
00:13:34.502 "admin_qpairs": 2,
00:13:34.502 "io_qpairs": 84,
00:13:34.502 "current_admin_qpairs": 0,
00:13:34.502 "current_io_qpairs": 0,
00:13:34.502 "pending_bdev_io": 0,
00:13:34.502 "completed_nvme_io": 220,
00:13:34.502 "transports": [
00:13:34.502 {
00:13:34.502 "trtype": "TCP"
00:13:34.502 }
00:13:34.502 ]
00:13:34.502 },
00:13:34.502 {
00:13:34.502 "name": "nvmf_tgt_poll_group_001",
00:13:34.502 "admin_qpairs": 2,
00:13:34.502 "io_qpairs": 84,
00:13:34.502 "current_admin_qpairs": 0,
00:13:34.502 "current_io_qpairs": 0,
00:13:34.502 "pending_bdev_io": 0,
00:13:34.502 "completed_nvme_io": 153,
00:13:34.502 "transports": [
00:13:34.502 {
00:13:34.502 "trtype": "TCP"
00:13:34.502 }
00:13:34.502 ]
00:13:34.502 },
00:13:34.502 {
00:13:34.502 "name": "nvmf_tgt_poll_group_002",
00:13:34.502 "admin_qpairs": 1,
00:13:34.502 "io_qpairs": 84,
00:13:34.502 "current_admin_qpairs": 0,
00:13:34.502 "current_io_qpairs": 0,
00:13:34.502 "pending_bdev_io": 0,
00:13:34.502 "completed_nvme_io": 114,
00:13:34.502 "transports": [
00:13:34.502 {
00:13:34.502 "trtype": "TCP"
00:13:34.502 }
00:13:34.502 ]
00:13:34.502 },
00:13:34.502 {
00:13:34.502 "name": "nvmf_tgt_poll_group_003",
00:13:34.502 "admin_qpairs": 2,
00:13:34.502 "io_qpairs": 84,
00:13:34.502 "current_admin_qpairs": 0,
00:13:34.502 "current_io_qpairs": 0,
00:13:34.502 "pending_bdev_io": 0,
00:13:34.502 "completed_nvme_io": 199,
00:13:34.502 "transports": [
00:13:34.502 {
00:13:34.502 "trtype": "TCP"
00:13:34.502 }
00:13:34.502 ]
00:13:34.502 }
00:13:34.502 ]
00:13:34.502 }'
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:34.502 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq
'.poll_groups[].io_qpairs' 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.503 rmmod nvme_tcp 00:13:34.503 rmmod nvme_fabrics 00:13:34.503 rmmod nvme_keyring 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2007552 ']' 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2007552 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2007552 ']' 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2007552 00:13:34.503 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:34.761 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:34.761 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2007552 00:13:34.761 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:34.761 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:34.761 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2007552' 00:13:34.761 killing process with pid 2007552 00:13:34.761 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2007552 00:13:34.761 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2007552 00:13:35.019 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:35.019 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:35.019 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:35.019 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:35.019 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:35.019 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.019 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.019 20:07:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:37.584 00:13:37.584 real 0m26.152s 00:13:37.584 user 1m23.075s 00:13:37.584 sys 0m4.523s 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.584 ************************************ 00:13:37.584 END TEST nvmf_rpc 00:13:37.584 ************************************ 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.584 ************************************ 00:13:37.584 START TEST nvmf_invalid 00:13:37.584 ************************************ 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:37.584 * Looking for test storage... 00:13:37.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:37.584 20:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.584 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:37.585 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:37.585 20:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.585 20:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:40.119 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:40.119 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:40.119 Found net devices under 0000:84:00.0: cvl_0_0 00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.119 20:07:43 
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]]
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:13:40.119 Found net devices under 0000:84:00.1: cvl_0_1
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
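What nvmf_tcp_init has built so far is a tiny two-host topology on a single machine: one E810 port is hidden in a private network namespace as the target side, the other stays in the root namespace as the initiator. Condensed into one sketch, with names and addresses exactly as in this log (it also folds in the link bring-up, firewall, and ping steps that follow next in the trace):

  # Sketch of the topology nvmf_tcp_init builds (names/IPs from this log).
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target port -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity check

Because the two ports are presumably cabled back-to-back (NET_TYPE=phy), this yields a real-NIC initiator/target pair with no second machine; anything that must run "on the target" is simply wrapped in ip netns exec.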
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:40.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:40.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms
00:13:40.119
00:13:40.119 --- 10.0.0.2 ping statistics ---
00:13:40.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:40.119 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:13:40.119 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:40.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:40.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:13:40.120
00:13:40.120 --- 10.0.0.1 ping statistics ---
00:13:40.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:40.120 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2012052
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2012052
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2012052 ']'
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
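nvmf_tgt is launched inside the target namespace and waitforlisten then blocks until the app's JSON-RPC socket answers. The real helper lives in autotest_common.sh; a rough equivalent of the polling it performs (the probe RPC below is illustrative only -- any cheap method such as rpc_get_methods works):

  # Sketch: poll an SPDK app's RPC socket until it is ready (cf. waitforlisten).
  wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do                  # max_retries=100, as traced
          kill -0 "$pid" 2> /dev/null || return 1      # app died during startup
          scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null && return 0
          sleep 0.5
      done
      return 1
  }
  # usage, mirroring this run:
  #   ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  #   wait_for_rpc "$!"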
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:40.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:40.120 20:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:40.379 [2024-07-24 20:07:43.909401] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:13:40.379 [2024-07-24 20:07:43.909557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:40.379 EAL: No free 2048 kB hugepages reported on node 1
00:13:40.379 [2024-07-24 20:07:44.058634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:40.637 [2024-07-24 20:07:44.261959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:40.637 [2024-07-24 20:07:44.262065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:40.637 [2024-07-24 20:07:44.262101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:40.637 [2024-07-24 20:07:44.262129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:40.637 [2024-07-24 20:07:44.262156] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
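The nvmf_tgt flags traced above decode as follows: -m 0xF is a core mask whose four set bits produce the four reactors reported next, -e 0xFFFF enables all tracepoint groups (hence the spdk_trace notices), and -i 0 selects shared-memory instance 0. A quick check of the mask arithmetic:

  # Sketch: which cores a hex core mask selects.
  mask=0xF
  for ((core = 0; core < 8; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor on core $core"
  done
  # prints cores 0..3 -- matching 'Total cores available: 4' and the four
  # 'Reactor started on core N' notices below.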
00:13:40.637 [2024-07-24 20:07:44.262279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:40.637 [2024-07-24 20:07:44.262342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:13:40.637 [2024-07-24 20:07:44.262401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:13:40.637 [2024-07-24 20:07:44.262405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:40.637 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:40.637 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:13:40.637 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:40.637 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:40.637 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:40.895 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:40.895 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:40.895 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17291
00:13:41.153 [2024-07-24 20:07:44.720648] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:41.153 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:41.153 {
00:13:41.153 "nqn": "nqn.2016-06.io.spdk:cnode17291",
00:13:41.153 "tgt_name": "foobar",
00:13:41.153 "method": "nvmf_create_subsystem",
00:13:41.153 "req_id": 1
00:13:41.153 }
00:13:41.153 Got JSON-RPC error response
00:13:41.153 response:
00:13:41.153 {
00:13:41.153 "code": -32603,
00:13:41.153 "message": "Unable to find target foobar"
00:13:41.153 }'
00:13:41.153 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:41.153 {
00:13:41.153 "nqn": "nqn.2016-06.io.spdk:cnode17291",
00:13:41.153 "tgt_name": "foobar",
00:13:41.153 "method": "nvmf_create_subsystem",
00:13:41.153 "req_id": 1
00:13:41.153 }
00:13:41.153 Got JSON-RPC error response
00:13:41.153 response:
00:13:41.153 {
00:13:41.153 "code": -32603,
00:13:41.153 "message": "Unable to find target foobar"
00:13:41.153 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:41.153 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:41.153 20:07:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26441
00:13:41.411 [2024-07-24 20:07:45.069927] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26441: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:41.411 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:41.411 {
00:13:41.411 "nqn": "nqn.2016-06.io.spdk:cnode26441",
00:13:41.411 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:41.411 "method": "nvmf_create_subsystem",
00:13:41.411 "req_id": 1
00:13:41.411 }
00:13:41.411 Got JSON-RPC error response
00:13:41.411 response:
00:13:41.411 {
00:13:41.411 "code": -32602,
00:13:41.411 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:41.411 }'
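Each negative case in invalid.sh has the same shape: issue nvmf_create_subsystem with exactly one bad field, capture rpc.py's error dump, and glob-match the message (the doubled "response / response:" in the captured text is rpc.py's verbatim output, not a transcription glitch). Stripped of the xtrace, one step looks roughly like:

  # Sketch of one negative test step (paths and NQNs as in this log).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17291 2>&1) || true
  [[ $out == *"Unable to find target"* ]] || exit 1
  # a 0x1f control byte must be rejected in the serial number:
  out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
        nqn.2016-06.io.spdk:cnode26441 2>&1) || true
  [[ $out == *"Invalid SN"* ]] || exit 1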
00:13:41.411 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:41.411 {
00:13:41.411 "nqn": "nqn.2016-06.io.spdk:cnode26441",
00:13:41.411 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:41.411 "method": "nvmf_create_subsystem",
00:13:41.411 "req_id": 1
00:13:41.411 }
00:13:41.411 Got JSON-RPC error response
00:13:41.411 response:
00:13:41.411 {
00:13:41.411 "code": -32602,
00:13:41.411 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:41.411 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:41.411 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:41.411 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2667
00:13:41.669 [2024-07-24 20:07:45.435239] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2667: invalid model number 'SPDK_Controller'
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:41.927 {
00:13:41.927 "nqn": "nqn.2016-06.io.spdk:cnode2667",
00:13:41.927 "model_number": "SPDK_Controller\u001f",
00:13:41.927 "method": "nvmf_create_subsystem",
00:13:41.927 "req_id": 1
00:13:41.927 }
00:13:41.927 Got JSON-RPC error response
00:13:41.927 response:
00:13:41.927 {
00:13:41.927 "code": -32602,
00:13:41.927 "message": "Invalid MN SPDK_Controller\u001f"
00:13:41.927 }'
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:41.927 {
00:13:41.927 "nqn": "nqn.2016-06.io.spdk:cnode2667",
00:13:41.927 "model_number": "SPDK_Controller\u001f",
00:13:41.927 "method": "nvmf_create_subsystem",
00:13:41.927 "req_id": 1
00:13:41.927 }
00:13:41.927 Got JSON-RPC error response
00:13:41.927 response:
00:13:41.927 {
00:13:41.927 "code": -32602,
00:13:41.927 "message": "Invalid MN SPDK_Controller\u001f"
00:13:41.927 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid --
target/invalid.sh@25 -- # printf %x 32 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.927 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=- 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' oR:|D4z45e[AO%9#_FB-' 00:13:41.928 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ' oR:|D4z45e[AO%9#_FB-' nqn.2016-06.io.spdk:cnode3539 00:13:42.186 [2024-07-24 20:07:45.896833] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3539: invalid serial number ' oR:|D4z45e[AO%9#_FB-' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:42.186 { 00:13:42.186 "nqn": "nqn.2016-06.io.spdk:cnode3539", 00:13:42.186 "serial_number": " oR:|D4z45e[AO%9#_FB-", 00:13:42.186 "method": "nvmf_create_subsystem", 00:13:42.186 "req_id": 1 00:13:42.186 } 00:13:42.186 Got JSON-RPC error response 00:13:42.186 response: 00:13:42.186 { 00:13:42.186 "code": -32602, 00:13:42.186 "message": "Invalid SN oR:|D4z45e[AO%9#_FB-" 00:13:42.186 }' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:42.186 { 00:13:42.186 "nqn": "nqn.2016-06.io.spdk:cnode3539", 00:13:42.186 "serial_number": " oR:|D4z45e[AO%9#_FB-", 00:13:42.186 "method": "nvmf_create_subsystem", 00:13:42.186 "req_id": 1 00:13:42.186 } 00:13:42.186 Got JSON-RPC error response 00:13:42.186 response: 00:13:42.186 { 00:13:42.186 "code": -32602, 00:13:42.186 "message": "Invalid SN oR:|D4z45e[AO%9#_FB-" 00:13:42.186 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='#' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:42.186 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.444 20:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:42.444 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:42.444 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:42.444 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.444 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.444 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:42.445 
20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79'
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55'
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33'
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:42.445 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:13:42.446 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:13:42.446 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:13:42.446 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:42.446 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:42.446 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]]
00:13:42.446 20:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n`)>v#9#0`z0>>> JK)q>e@F<l,s*4;6Ox+O`yU3A'
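The long printf/echo/string+= run ending here is gen_random_s assembling a random string one byte at a time from ASCII 32-127 (the final echo above is reconstructed from that trace; the extraction had mangled it into repeated fragments). The lengths are deliberate: 21 and 41 characters are one byte longer than the NVMe Serial Number (20 bytes) and Model Number (40 bytes) fields, so nvmf_create_subsystem has to reject them with Invalid SN / Invalid MN. The generator, condensed (the script itself spells the chars array out literally rather than using seq):

  # Sketch of gen_random_s as traced above (target/invalid.sh).
  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))        # ASCII space .. DEL
      for ((ll = 0; ll < length; ll++)); do
          string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }
  # gen_random_s 21 produced ' oR:|D4z45e[AO%9#_FB-' earlier in this run.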
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:46.823 20:07:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:48.722 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:48.722
00:13:48.722 real 0m11.604s
00:13:48.722 user 0m29.839s
00:13:48.722 sys 0m3.441s
00:13:48.722 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:48.722 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:48.722 ************************************
00:13:48.722 END TEST nvmf_invalid
00:13:48.722 ************************************
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:48.981 ************************************
00:13:48.981 START TEST nvmf_connect_stress
00:13:48.981 ************************************
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:48.981 * Looking for test storage...
00:13:48.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- #
NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.981 20:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:52.266 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:52.267 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:52.267 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:52.267 Found net devices under 0000:84:00.0: cvl_0_0 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.267 20:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:52.267 Found net devices under 0000:84:00.1: cvl_0_1 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:52.267 00:13:52.267 --- 10.0.0.2 ping statistics --- 00:13:52.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.267 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:13:52.267 00:13:52.267 --- 10.0.0.1 ping statistics --- 00:13:52.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.267 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.267 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2014976 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2014976 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:52.268 20:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2014976 ']' 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.268 [2024-07-24 20:07:55.578845] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:13:52.268 [2024-07-24 20:07:55.578949] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.268 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.268 [2024-07-24 20:07:55.672301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.268 [2024-07-24 20:07:55.811359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.268 [2024-07-24 20:07:55.811446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.268 [2024-07-24 20:07:55.811492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.268 [2024-07-24 20:07:55.811518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.268 [2024-07-24 20:07:55.811542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
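Up to this point the trace has built the TCP test bed (nvmf_tcp_init in nvmf/common.sh) and started nvmf_tgt inside the new namespace. For anyone replaying the environment by hand, the namespace plumbing above reduces to the following commands; a minimal sketch, run as root, assuming the same cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace that appear in the trace:

#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced above: the target interface moves into a
# network namespace; the initiator interface stays in the root namespace.
set -euo pipefail

TARGET_IF=cvl_0_0       # hosts the NVMe-oF target inside the namespace
INITIATOR_IF=cvl_0_1    # remains in the root namespace as the initiator side
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port toward the initiator interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks, matching the two pings logged above.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Putting the target end of the link in its own namespace is what lets a single host act as both NVMe-oF target and initiator over real e810 ports without the kernel short-circuiting the traffic locally.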
00:13:52.268 [2024-07-24 20:07:55.811661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.268 [2024-07-24 20:07:55.811725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.268 [2024-07-24 20:07:55.811745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.268 20:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.268 [2024-07-24 20:07:55.989493] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.268 [2024-07-24 20:07:56.020324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.268 NULL1 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=2015116 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.268 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.526 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.784 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.784 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:52.784 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.784 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.784 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.042 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.042 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:53.042 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.042 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.042 20:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.300 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.300 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:53.300 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.300 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.300 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.878 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.878 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:53.878 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.878 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.878 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.157 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.157 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:54.157 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.157 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.157 20:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.415 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.415 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:54.415 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.415 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.415 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.672 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.672 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:54.672 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.672 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.930 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.930 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:54.930 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.930 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.930 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.495 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.495 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:55.495 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.495 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.495 20:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.752 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.752 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:55.752 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.752 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.752 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.010 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.010 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:56.010 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.010 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.010 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.268 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.268 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:56.268 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.268 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.268 20:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.526 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.526 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:56.526 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.526 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.526 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.092 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.092 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:57.092 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.092 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.092 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.350 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.350 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:57.350 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.350 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.350 20:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.608 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.608 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:57.608 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.608 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.608 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.866 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.866 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:57.866 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.866 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.866 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.123 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.123 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:58.123 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.123 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.123 20:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.689 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.689 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:58.689 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.689 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.689 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.946 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.946 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:58.946 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.946 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.946 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.204 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.204 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:59.204 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.204 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.204 20:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.461 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.461 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:59.461 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.461 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.461 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.719 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.719 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:13:59.719 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.719 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.719 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.285 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.285 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:00.285 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.285 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.285 20:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.542 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.542 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:00.542 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.542 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.542 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.799 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.799 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:00.799 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.799 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.799 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.056 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.056 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:01.056 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.056 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.056 20:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.314 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.314 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:01.314 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.314 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.314 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.879 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.879 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:01.879 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.879 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.879 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.136 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.136 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:02.136 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.137 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.137 20:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.394 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.394 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:02.394 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.394 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.394 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.394 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2015116 00:14:02.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (2015116) - No such process 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2015116 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.652 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.652 rmmod nvme_tcp 00:14:02.652 rmmod nvme_fabrics 00:14:02.652 rmmod nvme_keyring 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2014976 ']' 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2014976 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2014976 ']' 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2014976 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2014976 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:02.910 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:02.911 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2014976' 00:14:02.911 killing process with pid 2014976 00:14:02.911 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2014976 00:14:02.911 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2014976 00:14:03.168 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.168 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.168 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:03.168 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.168 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.168 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.168 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.168 20:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.699 20:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:05.699 00:14:05.699 real 0m16.324s 00:14:05.699 user 0m38.991s 00:14:05.699 sys 0m6.716s 00:14:05.699 20:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.699 20:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.699 ************************************ 00:14:05.699 END TEST nvmf_connect_stress 00:14:05.699 ************************************ 00:14:05.699 20:08:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:05.699 20:08:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:05.699 20:08:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.699 20:08:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.699 ************************************ 00:14:05.699 START TEST nvmf_fused_ordering 00:14:05.699 ************************************ 00:14:05.699 20:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:05.699 * Looking for test storage... 
00:14:05.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... repeated /opt/golangci, /opt/protoc and /opt/go entries elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same tail with one more /opt prefix, elided ...] 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same tail, elided ...] 00:14:05.699 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [... exported PATH echoed back, value elided; paths/export.sh@2-@4 prepend the golangci, protoc and go bin directories on every source, which is why the logged PATH accumulates duplicate /opt entries ...] 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:05.700 20:08:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.229 20:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:08.229 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:08.229 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
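The discovery loop traced here works off a vendor:device table (0x8086 - 0x159b is the Intel E810 part driven by ice on this rig) and resolves each matching PCI function to its kernel net device through sysfs. A rough standalone approximation of that scan, a sketch against the plain sysfs layout rather than SPDK's pci_bus_cache helper:

  #!/usr/bin/env bash
  # Sketch only: list E810 functions (vendor 0x8086, device 0x159b) and the
  # net devices under them, the way the gather step above reports them.
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")    # e.g. 0x8086
    device=$(<"$pci/device")    # e.g. 0x159b
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "  net device: ${net##*/}"   # e.g. cvl_0_0
    done
  done

Both 0000:84:00.0 and 0000:84:00.1 match here, which is why the trace records two "Found net devices" lines before moving on.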
00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:08.229 Found net devices under 0000:84:00.0: cvl_0_0 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.229 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:08.230 Found net devices under 0000:84:00.1: cvl_0_1 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.230 20:08:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:08.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:14:08.489 00:14:08.489 --- 10.0.0.2 ping statistics --- 00:14:08.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.489 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
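Condensed, the nvmf_tcp_init trace above amounts to the steps below: the target-side port cvl_0_0 is moved into a private network namespace so target and initiator traffic actually crosses the wire between the two E810 ports, the iptables rule opens the NVMe/TCP listener port, and the two pings (one per direction; the reverse one follows next) verify the link. Interface names and addresses are this run's:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator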
00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:14:08.489 00:14:08.489 --- 10.0.0.1 ping statistics --- 00:14:08.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.489 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2018400 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2018400 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2018400 ']' 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.489 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.490 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.490 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.490 [2024-07-24 20:08:12.239935] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
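nvmfappstart, traced above, backgrounds nvmf_tgt inside the target namespace and then parks in waitforlisten until the RPC socket answers. A minimal stand-in for that pattern; the polling loop is an assumed equivalent of SPDK's waitforlisten helper, not its actual body:

  # Start the NVMe-oF target inside the namespace, backgrounded.
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Poll the UNIX domain RPC socket until the target answers (or dies).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

Note the socket path is not namespaced: /var/tmp/spdk.sock is reachable from the root namespace even though the process runs inside cvl_0_0_ns_spdk, which is what lets the following rpc_cmd calls configure it directly.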
00:14:08.490 [2024-07-24 20:08:12.240107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.748 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.748 [2024-07-24 20:08:12.361693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.748 [2024-07-24 20:08:12.506107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.748 [2024-07-24 20:08:12.506179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.748 [2024-07-24 20:08:12.506198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.748 [2024-07-24 20:08:12.506215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.748 [2024-07-24 20:08:12.506229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.748 [2024-07-24 20:08:12.506265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.006 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.006 [2024-07-24 20:08:12.787270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.264 [2024-07-24 20:08:12.803540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.264 NULL1 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.264 20:08:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:09.264 [2024-07-24 20:08:12.853233] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
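The rpc_cmd calls traced above stand up the device under test before the initiator-side tool runs: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev exposed as namespace 1 (the "1GB" namespace reported below). Since rpc_cmd forwards to scripts/rpc.py, the same bring-up by hand would look roughly like this sketch; the fused_ordering(N) counters that follow (0 through 1023 in this run) then track the tool's fused command pairs, two commands the controller executes back to back, completing in submission order:

  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512      # 1000 MB backing, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Initiator side: stream fused pairs at the new subsystem.
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'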
00:14:09.264 [2024-07-24 20:08:12.853285] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018506 ] 00:14:09.264 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.830 Attached to nqn.2016-06.io.spdk:cnode1 00:14:09.830 Namespace ID: 1 size: 1GB 00:14:09.830 fused_ordering(0) 00:14:09.830 fused_ordering(1) [... fused_ordering(2) through fused_ordering(1022) elided: all 1024 counters were printed consecutively and in order, in bursts timestamped between 00:14:09.830 and 00:14:12.883 ...] 00:14:12.883 fused_ordering(1023) 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.883 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125
-- # return 0 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2018400 ']' 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2018400 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2018400 ']' 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2018400 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2018400 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2018400' 00:14:12.883 killing process with pid 2018400 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2018400 00:14:12.883 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2018400 00:14:13.141 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.142 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.142 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.142 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.142 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.142 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.142 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.142 20:08:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.674 20:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.674 00:14:15.674 real 0m9.952s 00:14:15.674 user 0m7.153s 00:14:15.674 sys 0m5.038s 00:14:15.674 20:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.674 20:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.674 ************************************ 00:14:15.674 END TEST nvmf_fused_ordering 00:14:15.674 ************************************ 00:14:15.674 20:08:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:15.674 20:08:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:15.674 20:08:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.675 20:08:18 
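The nvmftestfini trace above is the standard SPDK teardown: disarm the EXIT trap, retry-unload the kernel NVMe-oF modules, kill the target process by PID, and flush the test interface address. A minimal sketch of that pattern, assuming the harness exported the target PID in a variable named nvmfpid (the variable name and the sleep back-off are illustrative, not shown in this run):

    # retry module unload, as nvmf/common.sh@121-122 does
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1            # assumed back-off; the loop body is not visible in the trace
    done
    modprobe -v -r nvme-fabrics
    set -e
    # stop the nvmf_tgt reactor and reap it
    kill "$nvmfpid" && wait "$nvmfpid"
    ip -4 addr flush cvl_0_1   # drop the test address from the initiator port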
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.675 ************************************ 00:14:15.675 START TEST nvmf_ns_masking 00:14:15.675 ************************************ 00:14:15.675 20:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:15.675 * Looking for test storage... 00:14:15.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three /opt tool prefixes repeated five more times by successive export.sh sourcing, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated prefixes elided ...]:/var/lib/snapd/snap/bin 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated prefixes elided ...]:/var/lib/snapd/snap/bin 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated prefixes elided ...]:/var/lib/snapd/snap/bin 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.675 20:08:19
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b91d33c6-e31a-4c0e-8c49-a531c485be66 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=51f8c755-928b-4fed-b010-d0ac85a1cd4e 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=aca9baf5-df64-4304-95ff-4638d0ca6fd6 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.675 20:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
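ns_masking.sh@13-19, traced above, pre-generate the identifiers the rest of the test reuses: two namespace UUIDs, the subsystem and host NQNs, and a host identifier that is later handed to nvme connect as -I. Condensed, that setup is simply:

    # per-run identifiers, mirroring ns_masking.sh@13-19
    ns1uuid=$(uuidgen)
    ns2uuid=$(uuidgen)
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)    # passed to 'nvme connect -I'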
nvmf/common.sh@291 -- # local -a pci_devs 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.206 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:18.207 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:18.207 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:18.207 Found net devices under 0000:84:00.0: cvl_0_0 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:18.207 Found net devices under 0000:84:00.1: cvl_0_1 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.207 20:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:18.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:14:18.207 00:14:18.207 --- 10.0.0.2 ping statistics --- 00:14:18.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.207 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:14:18.207 00:14:18.207 --- 10.0.0.1 ping statistics --- 00:14:18.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.207 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2020903 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2020903 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2020903 ']' 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
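nvmf_tcp_init, traced at nvmf/common.sh@229-268 above, splits the two e810 ports across a network namespace so target and initiator can share one machine: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables rule opening the NVMe/TCP port. The sequence reduces to roughly:

    # condensed from the nvmf_tcp_init trace above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> host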
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.207 20:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.466 [2024-07-24 20:08:22.012761] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:14:18.466 [2024-07-24 20:08:22.012847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.466 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.466 [2024-07-24 20:08:22.097723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.466 [2024-07-24 20:08:22.242810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.466 [2024-07-24 20:08:22.242883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.466 [2024-07-24 20:08:22.242904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.466 [2024-07-24 20:08:22.242921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.466 [2024-07-24 20:08:22.242935] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.466 [2024-07-24 20:08:22.242973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.725 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.725 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:18.725 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.725 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:18.725 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.725 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.725 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:19.293 [2024-07-24 20:08:22.905841] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.293 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:19.293 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:19.293 20:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:19.552 Malloc1 00:14:19.552 20:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:20.118 Malloc2 00:14:20.118 20:08:23 
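nvmfappstart then launches the target inside the namespace, waits for its RPC socket, creates the TCP transport, and backs two 64 MiB malloc bdevs with 512-byte blocks for the namespaces. A condensed sketch, with $rpc standing in for the full scripts/rpc.py path and a trivial poll standing in for waitforlisten:

    # start nvmf_tgt in the target namespace, then provision over JSON-RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    $rpc nvmf_create_transport -t tcp -o -u 8192   # transport opts as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2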
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:20.375 20:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:20.633 20:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.201 [2024-07-24 20:08:24.873389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.201 20:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:21.201 20:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aca9baf5-df64-4304-95ff-4638d0ca6fd6 -a 10.0.0.2 -s 4420 -i 4 00:14:21.460 20:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.460 20:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:21.460 20:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.460 20:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:21.460 20:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:23.361 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:23.361 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:23.361 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.361 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:23.361 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.361 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:23.361 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:23.361 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.619 [ 0]:0x1 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
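With the bdevs in place, ns_masking.sh@62-64 wire up the data path: a subsystem that allows any host (-a) with serial SPDKISFASTANDAWESOME, namespace 1 backed by Malloc1 (auto-visible by default), and a TCP listener on the namespaced address; the connect helper then logs in with an explicit host NQN and the pre-generated host UUID. Roughly (again with $rpc as shorthand for scripts/rpc.py):

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # connect() from ns_masking.sh@22: 4 I/O queues, explicit host UUID
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4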
/dev/nvme0 -n 0x1 -o json 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e645a62684154ad4870b3e54d1bf110c 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e645a62684154ad4870b3e54d1bf110c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.619 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.186 [ 0]:0x1 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e645a62684154ad4870b3e54d1bf110c 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e645a62684154ad4870b3e54d1bf110c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.186 [ 1]:0x2 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=739ca3d5c7c543c7a0bbbeea61bec348 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 739ca3d5c7c543c7a0bbbeea61bec348 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.186 20:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.753 20:08:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:25.320 20:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:25.320 20:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
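The visibility probe used throughout (the "[ 0]:0x1" lines) is ns_masking.sh@43-45: list the namespaces on the controller, then pull the NGUID with nvme id-ns and require it to be non-zero, since a masked namespace reports an all-zero NGUID. A self-contained approximation of the helper:

    # approximate ns_is_visible: $1 is the NSID in hex, e.g. 0x1
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # visible namespaces carry a real NGUID; masked ones report all zeros
        [[ $nguid != "00000000000000000000000000000000" ]]
    }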
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aca9baf5-df64-4304-95ff-4638d0ca6fd6 -a 10.0.0.2 -s 4420 -i 4 00:14:25.578 20:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:25.578 20:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:25.578 20:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.578 20:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:25.578 20:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:25.578 20:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
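waitforserial (common/autotest_common.sh@1198-1208 in the trace) polls lsblk until the expected number of block devices carrying the subsystem serial show up; connect 1 above waits for exactly one device, because namespace 1 was just re-added with --no-auto-visible and only the auto-visible namespace 2 is exposed. An approximation of the helper:

    # approximate waitforserial: $1 = serial, $2 = expected device count (default 1)
    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
        done
        return 1
    }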
/dev/nvme0 -n 0x1 -o json 00:14:27.481 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.741 [ 0]:0x2 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=739ca3d5c7c543c7a0bbbeea61bec348 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 739ca3d5c7c543c7a0bbbeea61bec348 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.741 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.012 [ 0]:0x1 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e645a62684154ad4870b3e54d1bf110c 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e645a62684154ad4870b3e54d1bf110c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.012 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.012 [ 1]:0x2 00:14:28.284 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:14:28.284 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.284 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=739ca3d5c7c543c7a0bbbeea61bec348 00:14:28.284 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 739ca3d5c7c543c7a0bbbeea61bec348 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.284 20:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.543 [ 0]:0x2 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.543 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
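This is the crux of the masking test: a namespace created with --no-auto-visible stays hidden from every host until that host's NQN is allow-listed with nvmf_ns_add_host, and is hidden again after nvmf_ns_remove_host, all without the initiator reconnecting. The RPC pair exercised above (ns_masking.sh@80, @88, @93):

    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # ns 1 becomes visible to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # ns 1 hidden again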
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.802 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=739ca3d5c7c543c7a0bbbeea61bec348 00:14:28.802 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 739ca3d5c7c543c7a0bbbeea61bec348 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.802 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:28.802 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.802 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.369 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:29.369 20:08:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aca9baf5-df64-4304-95ff-4638d0ca6fd6 -a 10.0.0.2 -s 4420 -i 4 00:14:29.369 20:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:29.369 20:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:29.369 20:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.369 20:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:29.369 20:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:29.369 20:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:31.276 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:31.276 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:31.276 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.536 [ 0]:0x1 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e645a62684154ad4870b3e54d1bf110c 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e645a62684154ad4870b3e54d1bf110c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.536 [ 1]:0x2 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=739ca3d5c7c543c7a0bbbeea61bec348 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 739ca3d5c7c543c7a0bbbeea61bec348 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.536 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.102 20:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.102 [ 0]:0x2 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=739ca3d5c7c543c7a0bbbeea61bec348 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 739ca3d5c7c543c7a0bbbeea61bec348 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:32.102 20:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.669 [2024-07-24 20:08:36.190204] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:32.669 request: 00:14:32.669 { 00:14:32.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.669 "nsid": 2, 00:14:32.669 "host": "nqn.2016-06.io.spdk:host1", 00:14:32.669 "method": "nvmf_ns_remove_host", 00:14:32.669 "req_id": 1 00:14:32.669 } 00:14:32.669 Got JSON-RPC error response 00:14:32.669 response: 00:14:32.669 { 00:14:32.669 "code": -32602, 00:14:32.669 "message": "Invalid parameters" 00:14:32.669 } 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.669 [ 0]:0x2 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=739ca3d5c7c543c7a0bbbeea61bec348 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 739ca3d5c7c543c7a0bbbeea61bec348 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:32.669 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2022774 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2022774 /var/tmp/host.sock 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2022774 ']' 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:32.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.928 20:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.928 [2024-07-24 20:08:36.556776] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
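The visibility assertions traced above all reduce to one check: a namespace counts as present only when Identify Namespace returns a non-zero NGUID, since a namespace masked away from the host identifies as all zeroes (hence the comparisons against the 32-zero pattern). A minimal stand-alone sketch of that pattern, assuming a connected controller at /dev/nvme0 and jq on the PATH; the helper name is illustrative rather than the script's own:

#!/usr/bin/env bash
# Sketch: is namespace $1 (e.g. 0x1) visible through /dev/nvme0?
# A masked namespace answers Identify Namespace with an all-zero NGUID.
ns_visible() {
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_visible 0x1 && echo "nsid 0x1 visible" || echo "nsid 0x1 masked"

Pairing this check with the nvmf_ns_add_host / nvmf_ns_remove_host RPCs, as the trace does, verifies end to end that masking changes are actually observable from the initiator side.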
00:14:32.928 [2024-07-24 20:08:36.556888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022774 ] 00:14:32.928 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.928 [2024-07-24 20:08:36.642099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.186 [2024-07-24 20:08:36.785187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.445 20:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.445 20:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:33.445 20:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.703 20:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.961 20:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b91d33c6-e31a-4c0e-8c49-a531c485be66 00:14:33.961 20:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:33.961 20:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B91D33C6E31A4C0E8C49A531C485BE66 -i 00:14:34.525 20:08:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 51f8c755-928b-4fed-b010-d0ac85a1cd4e 00:14:34.525 20:08:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:34.525 20:08:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 51F8C755928B4FEDB010D0AC85A1CD4E -i 00:14:35.091 20:08:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.348 20:08:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:35.914 20:08:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:35.914 20:08:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:36.479 nvme0n1 00:14:36.479 20:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:36.479 20:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:37.046 nvme1n2 00:14:37.046 20:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:37.046 20:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:37.046 20:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:37.046 20:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:37.046 20:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:37.611 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:37.611 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:37.611 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:37.611 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:37.869 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b91d33c6-e31a-4c0e-8c49-a531c485be66 == \b\9\1\d\3\3\c\6\-\e\3\1\a\-\4\c\0\e\-\8\c\4\9\-\a\5\3\1\c\4\8\5\b\e\6\6 ]] 00:14:37.869 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:37.869 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:37.869 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 51f8c755-928b-4fed-b010-d0ac85a1cd4e == \5\1\f\8\c\7\5\5\-\9\2\8\b\-\4\f\e\d\-\b\0\1\0\-\d\0\a\c\8\5\a\1\c\d\4\e ]] 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2022774 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2022774 ']' 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2022774 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2022774 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2022774' 00:14:38.126 killing process with pid 2022774 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2022774 00:14:38.126 20:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2022774 00:14:38.692 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.951 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:38.951 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:38.951 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.951 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:38.951 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.951 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:38.951 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.951 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:38.951 rmmod nvme_tcp 00:14:39.209 rmmod nvme_fabrics 00:14:39.209 rmmod nvme_keyring 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2020903 ']' 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2020903 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2020903 ']' 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2020903 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2020903 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2020903' 00:14:39.209 killing process with pid 2020903 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2020903 00:14:39.209 20:08:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2020903 00:14:39.776 20:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.776 20:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.776 
20:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.776 20:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.776 20:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.776 20:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.776 20:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.776 20:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:41.675 00:14:41.675 real 0m26.397s 00:14:41.675 user 0m37.460s 00:14:41.675 sys 0m5.548s 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:41.675 ************************************ 00:14:41.675 END TEST nvmf_ns_masking 00:14:41.675 ************************************ 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.675 ************************************ 00:14:41.675 START TEST nvmf_nvme_cli 00:14:41.675 ************************************ 00:14:41.675 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.675 * Looking for test storage... 
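The ns_masking teardown just above follows the suite's standard discipline: the extra host process is registered in a trap ('killprocess $hostpid; nvmftestfini') so that module unload and namespace cleanup run even if an assertion aborts the script midway. A reduced sketch of that shape, with names shortened for illustration (the real helpers live in common/autotest_common.sh):

#!/usr/bin/env bash
# Sketch: keep teardown attached to every exit path of a target test.
cleanup() {
    if [[ -n ${tgt_pid:-} ]] && kill -0 "$tgt_pid" 2>/dev/null; then
        kill "$tgt_pid"
        wait "$tgt_pid" 2>/dev/null
    fi
    modprobe -v -r nvme-tcp 2>/dev/null || true
}
trap cleanup SIGINT SIGTERM EXIT
./build/bin/nvmf_tgt -m 0xF &
tgt_pid=$!
# ... test body: a failing assertion still reaches cleanup via the trap ...
trap - SIGINT SIGTERM EXIT    # detach on the happy path, then clean up once
cleanup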
00:14:41.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.935 20:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:41.935 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:41.936 20:08:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.480 20:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:44.480 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:44.480 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:44.480 20:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:44.480 Found net devices under 0000:84:00.0: cvl_0_0 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:44.480 Found net devices under 0000:84:00.1: cvl_0_1 00:14:44.480 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.481 20:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.481 20:08:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:44.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:14:44.481 00:14:44.481 --- 10.0.0.2 ping statistics --- 00:14:44.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.481 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:14:44.481 00:14:44.481 --- 10.0.0.1 ping statistics --- 00:14:44.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.481 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2025538 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2025538 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2025538 ']' 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.481 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:44.481 [2024-07-24 20:08:48.230564] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
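Everything the EAL banner above inherits was prepared by the network plumbing earlier in this block: one port of the E810 pair is moved into a private network namespace, both ends get 10.0.0.x/24 addresses, reachability is proven with ping, and only then is the target launched inside the namespace via ip netns exec, so the initiator and target share a real routed path on one machine. Condensed to its essentials (the namespace and interface names below are placeholders for this run's cvl_0_0_ns_spdk, cvl_0_0, and cvl_0_1):

# Sketch of the namespace topology exercised above.
ip netns add tgt_ns                            # private namespace for the target side
ip link set eth_tgt netns tgt_ns               # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev eth_ini            # initiator side stays in the root ns
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
ip link set eth_ini up
ip netns exec tgt_ns ip link set eth_tgt up
ip netns exec tgt_ns ip link set lo up
ping -c 1 10.0.0.2                             # prove reachability before NVMe traffic
ip netns exec tgt_ns ./build/bin/nvmf_tgt -m 0xF &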
00:14:44.481 [2024-07-24 20:08:48.230676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.760 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.760 [2024-07-24 20:08:48.355848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.019 [2024-07-24 20:08:48.548732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.019 [2024-07-24 20:08:48.548827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.019 [2024-07-24 20:08:48.548862] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.019 [2024-07-24 20:08:48.548890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.019 [2024-07-24 20:08:48.548917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.019 [2024-07-24 20:08:48.549050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.019 [2024-07-24 20:08:48.549108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.019 [2024-07-24 20:08:48.552454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.019 [2024-07-24 20:08:48.552485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.019 [2024-07-24 20:08:48.735538] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.019 Malloc0 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:45.019 20:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.019 Malloc1 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.019 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.277 [2024-07-24 20:08:48.825813] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:45.277 00:14:45.277 Discovery Log Number of Records 2, Generation counter 2 00:14:45.277 =====Discovery Log Entry 0====== 00:14:45.277 trtype: tcp 00:14:45.277 adrfam: ipv4 00:14:45.277 subtype: current discovery subsystem 00:14:45.277 treq: not required 
00:14:45.277 portid: 0 00:14:45.277 trsvcid: 4420 00:14:45.277 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:45.277 traddr: 10.0.0.2 00:14:45.277 eflags: explicit discovery connections, duplicate discovery information 00:14:45.277 sectype: none 00:14:45.277 =====Discovery Log Entry 1====== 00:14:45.277 trtype: tcp 00:14:45.277 adrfam: ipv4 00:14:45.277 subtype: nvme subsystem 00:14:45.277 treq: not required 00:14:45.277 portid: 0 00:14:45.277 trsvcid: 4420 00:14:45.277 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:45.277 traddr: 10.0.0.2 00:14:45.277 eflags: none 00:14:45.277 sectype: none 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:45.277 20:08:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.211 20:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:46.211 20:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:46.211 20:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.211 20:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:46.211 20:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:46.211 20:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:48.110 /dev/nvme0n1 ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.110 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.110 rmmod nvme_tcp 00:14:48.110 rmmod nvme_fabrics 00:14:48.110 rmmod nvme_keyring 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2025538 ']' 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2025538 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2025538 ']' 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2025538 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.110 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2025538 00:14:48.369 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.369 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.369 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2025538' 00:14:48.369 killing process with pid 2025538 00:14:48.369 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2025538 00:14:48.369 20:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2025538 00:14:48.628 20:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.628 20:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.628 20:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.628 20:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.628 20:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.628 20:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.628 20:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.628 20:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.161 00:14:51.161 real 0m9.044s 00:14:51.161 user 0m15.705s 00:14:51.161 sys 0m2.716s 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.161 ************************************ 00:14:51.161 END TEST nvmf_nvme_cli 00:14:51.161 ************************************ 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.161 ************************************ 00:14:51.161 START TEST nvmf_vfio_user 00:14:51.161 ************************************ 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:51.161 * Looking for test storage... 
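Before the vfio-user output begins, the teardown xtrace above is worth collapsing into readable shell: get_nvme_devs (nvmf/common.sh@521-526) filters `nvme list` output down to /dev/nvme* device nodes, and waitforserial_disconnect (common/autotest_common.sh@1219-1231) polls lsblk until the test serial vanishes. A minimal sketch reconstructed from the trace; the retry bound in the polling loop is an assumption, since the trace only shows a pass where the serial is already gone:

    get_nvme_devs() {
        local dev _
        # keep only first-column entries that name a kernel NVMe node
        while read -r dev _; do
            [[ $dev == /dev/nvme* ]] && echo "$dev"
        done < <(nvme list)
    }

    waitforserial_disconnect() {
        local i=0
        # assumed bound of 15 retries; the suite's exact loop is not in the trace
        while lsblk -o NAME,SERIAL | grep -q -w "$1"; do
            (( ++i > 15 )) && return 1
            sleep 1
        done
        # final check against the flat listing, as in the trace (@1227)
        lsblk -l -o NAME,SERIAL | grep -q -w "$1" && return 1
        return 0
    }

With those two helpers, the step above reduces to devs=($(get_nvme_devs)), nvme disconnect -n nqn.2016-06.io.spdk:cnode1, then waitforserial_disconnect SPDKISFASTANDAWESOME.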
00:14:51.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
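One detail of the common.sh sourcing above: the host identity passed to every connect in this suite is generated fresh per run from nvme-cli. A sketch of the pattern; the parameter expansion used to derive the host ID is an assumption, as the trace shows only the resulting values:

    # nvme gen-hostnqn prints nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # assumed extraction: strip everything up to and including 'uuid:'
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")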
00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:51.161 20:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2026346 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2026346' 00:14:51.161 Process pid: 2026346 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2026346 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2026346 ']' 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.161 20:08:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:51.161 [2024-07-24 20:08:54.671884] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:14:51.161 [2024-07-24 20:08:54.672000] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.161 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.161 [2024-07-24 20:08:54.759799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.161 [2024-07-24 20:08:54.902973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.162 [2024-07-24 20:08:54.903043] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:51.162 [2024-07-24 20:08:54.903063] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.162 [2024-07-24 20:08:54.903079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.162 [2024-07-24 20:08:54.903092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.162 [2024-07-24 20:08:54.904460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.162 [2024-07-24 20:08:54.904509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.162 [2024-07-24 20:08:54.904542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.162 [2024-07-24 20:08:54.904546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.420 20:08:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.420 20:08:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:51.420 20:08:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:52.353 20:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:52.920 20:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:52.920 20:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:52.920 20:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.920 20:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:52.920 20:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:53.178 Malloc1 00:14:53.178 20:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:53.743 20:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:54.001 20:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:54.260 20:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:54.260 20:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:54.260 20:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:54.829 Malloc2 00:14:54.829 20:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
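Stripped of the xtrace noise, everything from the target launch (target/nvmf_vfio_user.sh@54) through the provisioning above fits in a short script. A sketch for the first controller, assuming the default /var/tmp/spdk.sock RPC socket; the readiness poll stands in for waitforlisten, whose implementation the trace does not show, and plain kill replaces the suite's killprocess helper:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_py=$spdk/scripts/rpc.py

    # launch the target on cores 0-3 with all tracepoint groups enabled
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # assumed readiness poll standing in for waitforlisten
    until $rpc_py rpc_get_methods &> /dev/null; do sleep 1; done

    # one VFIOUSER transport, then a malloc-backed subsystem per controller
    $rpc_py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same bdev/subsystem/namespace/listener sequence repeats with Malloc2, cnode2 and vfio-user2/2, which is what the surrounding trace replays; note the listener address is a filesystem directory, not an IP:port pair.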
00:14:55.394 20:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:55.973 20:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:56.232 20:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:56.232 20:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:56.232 20:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.232 20:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:56.232 20:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:56.232 20:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:56.232 [2024-07-24 20:08:59.941994] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:14:56.232 [2024-07-24 20:08:59.942041] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027020 ] 00:14:56.232 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.232 [2024-07-24 20:08:59.984920] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:56.232 [2024-07-24 20:08:59.989637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:56.232 [2024-07-24 20:08:59.989677] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f29e3dc1000 00:14:56.232 [2024-07-24 20:08:59.990626] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.232 [2024-07-24 20:08:59.991615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.232 [2024-07-24 20:08:59.992625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.232 [2024-07-24 20:08:59.993631] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.232 [2024-07-24 20:08:59.994635] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.232 [2024-07-24 20:08:59.995659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.232 [2024-07-24 20:08:59.996643] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.232 [2024-07-24 20:08:59.997647] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.232 [2024-07-24 20:08:59.998655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:56.232 [2024-07-24 20:08:59.998682] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f29e3db6000 00:14:56.232 [2024-07-24 20:09:00.000276] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:56.492 [2024-07-24 20:09:00.022417] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:56.492 [2024-07-24 20:09:00.022494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:56.492 [2024-07-24 20:09:00.024854] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:56.492 [2024-07-24 20:09:00.024935] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:56.492 [2024-07-24 20:09:00.025075] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:56.492 [2024-07-24 20:09:00.025116] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:56.492 [2024-07-24 20:09:00.025132] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:56.492 [2024-07-24 20:09:00.025833] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:56.492 [2024-07-24 20:09:00.025870] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:56.492 [2024-07-24 20:09:00.025889] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:56.492 [2024-07-24 20:09:00.026847] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:56.492 [2024-07-24 20:09:00.026875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:56.492 [2024-07-24 20:09:00.026894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:56.492 [2024-07-24 20:09:00.027849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:56.492 [2024-07-24 20:09:00.027875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:56.492 [2024-07-24 20:09:00.028863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:56.492 [2024-07-24 20:09:00.028889] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:56.493 [2024-07-24 20:09:00.028902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:56.493 [2024-07-24 20:09:00.028918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:56.493 [2024-07-24 20:09:00.029031] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:56.493 [2024-07-24 20:09:00.029043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:56.493 [2024-07-24 20:09:00.029055] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:56.493 [2024-07-24 20:09:00.029894] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:56.493 [2024-07-24 20:09:00.033444] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:56.493 [2024-07-24 20:09:00.033918] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:56.493 [2024-07-24 20:09:00.034911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.493 [2024-07-24 20:09:00.035122] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:56.493 [2024-07-24 20:09:00.035942] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:56.493 [2024-07-24 20:09:00.035968] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:56.493 [2024-07-24 20:09:00.035981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036016] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:56.493 [2024-07-24 20:09:00.036043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036080] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.493 [2024-07-24 20:09:00.036095] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.493 [2024-07-24 20:09:00.036105] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.493 [2024-07-24 20:09:00.036132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.493 [2024-07-24 20:09:00.036230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:56.493 [2024-07-24 20:09:00.036255] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:56.493 [2024-07-24 20:09:00.036267] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:56.493 [2024-07-24 20:09:00.036278] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:56.493 [2024-07-24 20:09:00.036289] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:56.493 [2024-07-24 20:09:00.036300] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:56.493 [2024-07-24 20:09:00.036311] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:56.493 [2024-07-24 20:09:00.036322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:56.493 [2024-07-24 20:09:00.036398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:56.493 [2024-07-24 20:09:00.036439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.493 [2024-07-24 20:09:00.036461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.493 [2024-07-24 20:09:00.036478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.493 [2024-07-24 20:09:00.036495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.493 [2024-07-24 20:09:00.036508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:56.493 [2024-07-24 20:09:00.036573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:56.493 [2024-07-24 20:09:00.036588] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:56.493 
[2024-07-24 20:09:00.036600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.493 [2024-07-24 20:09:00.036675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:56.493 [2024-07-24 20:09:00.036768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036811] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:56.493 [2024-07-24 20:09:00.036823] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:56.493 [2024-07-24 20:09:00.036832] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.493 [2024-07-24 20:09:00.036845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:56.493 [2024-07-24 20:09:00.036871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:56.493 [2024-07-24 20:09:00.036895] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:56.493 [2024-07-24 20:09:00.036917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.036955] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.493 [2024-07-24 20:09:00.036967] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.493 [2024-07-24 20:09:00.036975] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.493 [2024-07-24 20:09:00.036989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.493 [2024-07-24 20:09:00.037028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:56.493 [2024-07-24 20:09:00.037059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037103] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.493 [2024-07-24 20:09:00.037115] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.493 [2024-07-24 20:09:00.037124] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.493 [2024-07-24 20:09:00.037137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.493 [2024-07-24 20:09:00.037153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:56.493 [2024-07-24 20:09:00.037174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037266] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:56.493 [2024-07-24 20:09:00.037277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:56.493 [2024-07-24 20:09:00.037289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:56.493 [2024-07-24 20:09:00.037324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:56.493 [2024-07-24 20:09:00.037349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:56.493 [2024-07-24 20:09:00.037376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:56.494 [2024-07-24 20:09:00.037394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:56.494 [2024-07-24 20:09:00.037416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:56.494 [2024-07-24 
20:09:00.037445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:56.494 [2024-07-24 20:09:00.037470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.494 [2024-07-24 20:09:00.037488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:56.494 [2024-07-24 20:09:00.037520] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:56.494 [2024-07-24 20:09:00.037535] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:56.494 [2024-07-24 20:09:00.037543] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:56.494 [2024-07-24 20:09:00.037556] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:56.494 [2024-07-24 20:09:00.037565] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:56.494 [2024-07-24 20:09:00.037578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:56.494 [2024-07-24 20:09:00.037595] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:56.494 [2024-07-24 20:09:00.037607] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:56.494 [2024-07-24 20:09:00.037615] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.494 [2024-07-24 20:09:00.037628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:56.494 [2024-07-24 20:09:00.037644] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:56.494 [2024-07-24 20:09:00.037655] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.494 [2024-07-24 20:09:00.037664] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.494 [2024-07-24 20:09:00.037676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.494 [2024-07-24 20:09:00.037693] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:56.494 [2024-07-24 20:09:00.037705] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:56.494 [2024-07-24 20:09:00.037714] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.494 [2024-07-24 20:09:00.037726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:56.494 [2024-07-24 20:09:00.037743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:56.494 [2024-07-24 20:09:00.037771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:56.494 [2024-07-24 
20:09:00.037795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:56.494 [2024-07-24 20:09:00.037813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:56.494 ===================================================== 00:14:56.494 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.494 ===================================================== 00:14:56.494 Controller Capabilities/Features 00:14:56.494 ================================ 00:14:56.494 Vendor ID: 4e58 00:14:56.494 Subsystem Vendor ID: 4e58 00:14:56.494 Serial Number: SPDK1 00:14:56.494 Model Number: SPDK bdev Controller 00:14:56.494 Firmware Version: 24.09 00:14:56.494 Recommended Arb Burst: 6 00:14:56.494 IEEE OUI Identifier: 8d 6b 50 00:14:56.494 Multi-path I/O 00:14:56.494 May have multiple subsystem ports: Yes 00:14:56.494 May have multiple controllers: Yes 00:14:56.494 Associated with SR-IOV VF: No 00:14:56.494 Max Data Transfer Size: 131072 00:14:56.494 Max Number of Namespaces: 32 00:14:56.494 Max Number of I/O Queues: 127 00:14:56.494 NVMe Specification Version (VS): 1.3 00:14:56.494 NVMe Specification Version (Identify): 1.3 00:14:56.494 Maximum Queue Entries: 256 00:14:56.494 Contiguous Queues Required: Yes 00:14:56.494 Arbitration Mechanisms Supported 00:14:56.494 Weighted Round Robin: Not Supported 00:14:56.494 Vendor Specific: Not Supported 00:14:56.494 Reset Timeout: 15000 ms 00:14:56.494 Doorbell Stride: 4 bytes 00:14:56.494 NVM Subsystem Reset: Not Supported 00:14:56.494 Command Sets Supported 00:14:56.494 NVM Command Set: Supported 00:14:56.494 Boot Partition: Not Supported 00:14:56.494 Memory Page Size Minimum: 4096 bytes 00:14:56.494 Memory Page Size Maximum: 4096 bytes 00:14:56.494 Persistent Memory Region: Not Supported 00:14:56.494 Optional Asynchronous Events Supported 00:14:56.494 Namespace Attribute Notices: Supported 00:14:56.494 Firmware Activation Notices: Not Supported 00:14:56.494 ANA Change Notices: Not Supported 00:14:56.494 PLE Aggregate Log Change Notices: Not Supported 00:14:56.494 LBA Status Info Alert Notices: Not Supported 00:14:56.494 EGE Aggregate Log Change Notices: Not Supported 00:14:56.494 Normal NVM Subsystem Shutdown event: Not Supported 00:14:56.494 Zone Descriptor Change Notices: Not Supported 00:14:56.494 Discovery Log Change Notices: Not Supported 00:14:56.494 Controller Attributes 00:14:56.494 128-bit Host Identifier: Supported 00:14:56.494 Non-Operational Permissive Mode: Not Supported 00:14:56.494 NVM Sets: Not Supported 00:14:56.494 Read Recovery Levels: Not Supported 00:14:56.494 Endurance Groups: Not Supported 00:14:56.494 Predictable Latency Mode: Not Supported 00:14:56.494 Traffic Based Keep ALive: Not Supported 00:14:56.494 Namespace Granularity: Not Supported 00:14:56.494 SQ Associations: Not Supported 00:14:56.494 UUID List: Not Supported 00:14:56.494 Multi-Domain Subsystem: Not Supported 00:14:56.494 Fixed Capacity Management: Not Supported 00:14:56.494 Variable Capacity Management: Not Supported 00:14:56.494 Delete Endurance Group: Not Supported 00:14:56.494 Delete NVM Set: Not Supported 00:14:56.494 Extended LBA Formats Supported: Not Supported 00:14:56.494 Flexible Data Placement Supported: Not Supported 00:14:56.494 00:14:56.494 Controller Memory Buffer Support 00:14:56.494 ================================ 00:14:56.494 Supported: No 00:14:56.494 00:14:56.494 Persistent 
Memory Region Support 00:14:56.494 ================================ 00:14:56.494 Supported: No 00:14:56.494 00:14:56.494 Admin Command Set Attributes 00:14:56.494 ============================ 00:14:56.494 Security Send/Receive: Not Supported 00:14:56.494 Format NVM: Not Supported 00:14:56.494 Firmware Activate/Download: Not Supported 00:14:56.494 Namespace Management: Not Supported 00:14:56.494 Device Self-Test: Not Supported 00:14:56.494 Directives: Not Supported 00:14:56.494 NVMe-MI: Not Supported 00:14:56.494 Virtualization Management: Not Supported 00:14:56.494 Doorbell Buffer Config: Not Supported 00:14:56.494 Get LBA Status Capability: Not Supported 00:14:56.494 Command & Feature Lockdown Capability: Not Supported 00:14:56.494 Abort Command Limit: 4 00:14:56.494 Async Event Request Limit: 4 00:14:56.494 Number of Firmware Slots: N/A 00:14:56.494 Firmware Slot 1 Read-Only: N/A 00:14:56.494 Firmware Activation Without Reset: N/A 00:14:56.494 Multiple Update Detection Support: N/A 00:14:56.494 Firmware Update Granularity: No Information Provided 00:14:56.494 Per-Namespace SMART Log: No 00:14:56.494 Asymmetric Namespace Access Log Page: Not Supported 00:14:56.494 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:56.494 Command Effects Log Page: Supported 00:14:56.494 Get Log Page Extended Data: Supported 00:14:56.494 Telemetry Log Pages: Not Supported 00:14:56.494 Persistent Event Log Pages: Not Supported 00:14:56.494 Supported Log Pages Log Page: May Support 00:14:56.494 Commands Supported & Effects Log Page: Not Supported 00:14:56.494 Feature Identifiers & Effects Log Page:May Support 00:14:56.495 NVMe-MI Commands & Effects Log Page: May Support 00:14:56.495 Data Area 4 for Telemetry Log: Not Supported 00:14:56.495 Error Log Page Entries Supported: 128 00:14:56.495 Keep Alive: Supported 00:14:56.495 Keep Alive Granularity: 10000 ms 00:14:56.495 00:14:56.495 NVM Command Set Attributes 00:14:56.495 ========================== 00:14:56.495 Submission Queue Entry Size 00:14:56.495 Max: 64 00:14:56.495 Min: 64 00:14:56.495 Completion Queue Entry Size 00:14:56.495 Max: 16 00:14:56.495 Min: 16 00:14:56.495 Number of Namespaces: 32 00:14:56.495 Compare Command: Supported 00:14:56.495 Write Uncorrectable Command: Not Supported 00:14:56.495 Dataset Management Command: Supported 00:14:56.495 Write Zeroes Command: Supported 00:14:56.495 Set Features Save Field: Not Supported 00:14:56.495 Reservations: Not Supported 00:14:56.495 Timestamp: Not Supported 00:14:56.495 Copy: Supported 00:14:56.495 Volatile Write Cache: Present 00:14:56.495 Atomic Write Unit (Normal): 1 00:14:56.495 Atomic Write Unit (PFail): 1 00:14:56.495 Atomic Compare & Write Unit: 1 00:14:56.495 Fused Compare & Write: Supported 00:14:56.495 Scatter-Gather List 00:14:56.495 SGL Command Set: Supported (Dword aligned) 00:14:56.495 SGL Keyed: Not Supported 00:14:56.495 SGL Bit Bucket Descriptor: Not Supported 00:14:56.495 SGL Metadata Pointer: Not Supported 00:14:56.495 Oversized SGL: Not Supported 00:14:56.495 SGL Metadata Address: Not Supported 00:14:56.495 SGL Offset: Not Supported 00:14:56.495 Transport SGL Data Block: Not Supported 00:14:56.495 Replay Protected Memory Block: Not Supported 00:14:56.495 00:14:56.495 Firmware Slot Information 00:14:56.495 ========================= 00:14:56.495 Active slot: 1 00:14:56.495 Slot 1 Firmware Revision: 24.09 00:14:56.495 00:14:56.495 00:14:56.495 Commands Supported and Effects 00:14:56.495 ============================== 00:14:56.495 Admin Commands 00:14:56.495 -------------- 00:14:56.495 Get 
Log Page (02h): Supported 00:14:56.495 Identify (06h): Supported 00:14:56.495 Abort (08h): Supported 00:14:56.495 Set Features (09h): Supported 00:14:56.495 Get Features (0Ah): Supported 00:14:56.495 Asynchronous Event Request (0Ch): Supported 00:14:56.495 Keep Alive (18h): Supported 00:14:56.495 I/O Commands 00:14:56.495 ------------ 00:14:56.495 Flush (00h): Supported LBA-Change 00:14:56.495 Write (01h): Supported LBA-Change 00:14:56.495 Read (02h): Supported 00:14:56.495 Compare (05h): Supported 00:14:56.495 Write Zeroes (08h): Supported LBA-Change 00:14:56.495 Dataset Management (09h): Supported LBA-Change 00:14:56.495 Copy (19h): Supported LBA-Change 00:14:56.495 00:14:56.495 Error Log 00:14:56.495 ========= 00:14:56.495 00:14:56.495 Arbitration 00:14:56.495 =========== 00:14:56.495 Arbitration Burst: 1 00:14:56.495 00:14:56.495 Power Management 00:14:56.495 ================ 00:14:56.495 Number of Power States: 1 00:14:56.495 Current Power State: Power State #0 00:14:56.495 Power State #0: 00:14:56.495 Max Power: 0.00 W 00:14:56.495 Non-Operational State: Operational 00:14:56.495 Entry Latency: Not Reported 00:14:56.495 Exit Latency: Not Reported 00:14:56.495 Relative Read Throughput: 0 00:14:56.495 Relative Read Latency: 0 00:14:56.495 Relative Write Throughput: 0 00:14:56.495 Relative Write Latency: 0 00:14:56.495 Idle Power: Not Reported 00:14:56.495 Active Power: Not Reported 00:14:56.495 Non-Operational Permissive Mode: Not Supported 00:14:56.495 00:14:56.495 Health Information 00:14:56.495 ================== 00:14:56.495 Critical Warnings: 00:14:56.495 Available Spare Space: OK 00:14:56.495 Temperature: OK 00:14:56.495 Device Reliability: OK 00:14:56.495 Read Only: No 00:14:56.495 Volatile Memory Backup: OK 00:14:56.495 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:56.495 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:56.495 Available Spare: 0% 00:14:56.495 Available Spare Threshold: 0% 00:14:56.495 Life Percentage Used: 0% 00:14:56.495 Data Units Read: 0 00:14:56.495 Data Units Written: 0 00:14:56.495 Host Read Commands: 0 00:14:56.495 Host Write Commands: 0 00:14:56.495 Controller Busy Time: 0 minutes 00:14:56.495 Power Cycles: 0 00:14:56.495 Power On Hours: 0 hours 00:14:56.495 Unsafe Shutdowns: 0 00:14:56.495 Unrecoverable Media Errors: 0 00:14:56.495 Lifetime Error Log Entries: 0 00:14:56.495 Warning Temperature Time: 0 minutes 00:14:56.495 Critical Temperature Time: 0 minutes 00:14:56.495 00:14:56.495 Number of Queues 00:14:56.495 ================ 00:14:56.495 Number of I/O Submission Queues: 127 00:14:56.495 Number of I/O Completion Queues: 127 00:14:56.495 00:14:56.495 Active Namespaces 00:14:56.495 ================= 00:14:56.495 Namespace ID:1 00:14:56.495 Error Recovery Timeout: Unlimited 00:14:56.495 Command Set Identifier: NVM (00h) 00:14:56.495 Deallocate: Supported 00:14:56.495 Deallocated/Unwritten Error: Not Supported 00:14:56.495 Deallocated Read Value: Unknown 00:14:56.495 Deallocate in Write Zeroes: Not Supported 00:14:56.495 Deallocated Guard Field: 0xFFFF 00:14:56.495 Flush: Supported 00:14:56.495 Reservation: Supported 00:14:56.495 Namespace Sharing Capabilities: Multiple Controllers 00:14:56.495 Size (in LBAs): 131072 (0GiB) 00:14:56.495 Capacity (in LBAs): 131072 (0GiB) 00:14:56.495 Utilization (in LBAs): 131072 (0GiB) 00:14:56.495 NGUID: 956A363E87B04EBBB55FA6D1E1135FF6 00:14:56.495 UUID: 956a363e-87b0-4ebb-b55f-a6d1e1135ff6 00:14:56.495 Thin Provisioning: Not Supported 00:14:56.495 Per-NS Atomic Units: Yes 00:14:56.495 Atomic Boundary Size (Normal): 0 00:14:56.495 Atomic Boundary Size (PFail): 0 00:14:56.495 Atomic Boundary Offset: 0 00:14:56.495 Maximum Single Source Range Length: 65535 00:14:56.495 Maximum Copy Length: 65535 00:14:56.495 Maximum Source Range Count: 1 00:14:56.495 NGUID/EUI64 Never Reused: No 00:14:56.495 Namespace Write Protected: No 00:14:56.495 Number of LBA Formats: 1 00:14:56.495 Current LBA Format: LBA Format #00 00:14:56.495 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:56.495 00:14:56.495
[2024-07-24 20:09:00.037983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:56.495 [2024-07-24 20:09:00.038006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:56.495 [2024-07-24 20:09:00.038067] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:56.495 [2024-07-24 20:09:00.038092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.495 [2024-07-24 20:09:00.038108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.495 [2024-07-24 20:09:00.038122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.495 [2024-07-24 20:09:00.038135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.495 [2024-07-24 20:09:00.038949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:56.495 [2024-07-24 20:09:00.038981] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:56.495 [2024-07-24 20:09:00.039951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.495 [2024-07-24 20:09:00.040063] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:56.495 [2024-07-24 20:09:00.040083] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:56.495 [2024-07-24 20:09:00.042442] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:56.495 [2024-07-24 20:09:00.042472] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 2 milliseconds 00:14:56.495 [2024-07-24 20:09:00.042572] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:56.495 [2024-07-24 20:09:00.047444] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:56.495
20:09:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:56.753 EAL: No free 2048 kB hugepages reported
on node 1 00:14:56.753 [2024-07-24 20:09:00.344270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.016 Initializing NVMe Controllers 00:15:02.016 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.016 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:02.016 Initialization complete. Launching workers. 00:15:02.016 ======================================================== 00:15:02.016 Latency(us) 00:15:02.016 Device Information : IOPS MiB/s Average min max 00:15:02.016 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24061.20 93.99 5324.06 1685.55 13046.60 00:15:02.016 ======================================================== 00:15:02.016 Total : 24061.20 93.99 5324.06 1685.55 13046.60 00:15:02.016 00:15:02.016 [2024-07-24 20:09:05.370402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.016 20:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:02.016 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.016 [2024-07-24 20:09:05.648969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.344 Initializing NVMe Controllers 00:15:07.344 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:07.344 Initialization complete. Launching workers. 
00:15:07.344 ======================================================== 00:15:07.344 Latency(us) 00:15:07.344 Device Information : IOPS MiB/s Average min max 00:15:07.344 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15971.68 62.39 8019.09 7663.08 15988.93 00:15:07.344 ======================================================== 00:15:07.344 Total : 15971.68 62.39 8019.09 7663.08 15988.93 00:15:07.344 00:15:07.344 [2024-07-24 20:09:10.689797] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.344 20:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:07.344 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.344 [2024-07-24 20:09:10.997405] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.606 [2024-07-24 20:09:16.077901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.606 Initializing NVMe Controllers 00:15:12.606 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.606 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.606 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:12.606 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:12.606 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:12.606 Initialization complete. Launching workers. 00:15:12.606 Starting thread on core 2 00:15:12.606 Starting thread on core 3 00:15:12.606 Starting thread on core 1 00:15:12.606 20:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:12.606 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.864 [2024-07-24 20:09:16.516028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.057 [2024-07-24 20:09:20.209197] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.057 Initializing NVMe Controllers 00:15:17.057 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.057 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:17.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:17.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:17.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:17.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:17.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:17.057 Initialization complete. Launching workers. 
00:15:17.057 Starting thread on core 1 with urgent priority queue 00:15:17.057 Starting thread on core 2 with urgent priority queue 00:15:17.057 Starting thread on core 3 with urgent priority queue 00:15:17.057 Starting thread on core 0 with urgent priority queue 00:15:17.057 SPDK bdev Controller (SPDK1 ) core 0: 3526.00 IO/s 28.36 secs/100000 ios 00:15:17.057 SPDK bdev Controller (SPDK1 ) core 1: 3632.33 IO/s 27.53 secs/100000 ios 00:15:17.057 SPDK bdev Controller (SPDK1 ) core 2: 3712.00 IO/s 26.94 secs/100000 ios 00:15:17.057 SPDK bdev Controller (SPDK1 ) core 3: 3744.33 IO/s 26.71 secs/100000 ios 00:15:17.057 ======================================================== 00:15:17.057 00:15:17.057 20:09:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:17.057 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.057 [2024-07-24 20:09:20.636071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.057 Initializing NVMe Controllers 00:15:17.057 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.057 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.057 Namespace ID: 1 size: 0GB 00:15:17.057 Initialization complete. 00:15:17.057 INFO: using host memory buffer for IO 00:15:17.057 Hello world! 00:15:17.057 [2024-07-24 20:09:20.670225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.057 20:09:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:17.057 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.314 [2024-07-24 20:09:21.080102] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.687 Initializing NVMe Controllers 00:15:18.687 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.687 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.687 Initialization complete. Launching workers. 
00:15:18.687 submit (in ns) avg, min, max = 9367.5, 5001.5, 4006149.6 00:15:18.687 complete (in ns) avg, min, max = 37657.9, 2881.5, 4005680.0 00:15:18.688 00:15:18.688 Submit histogram 00:15:18.688 ================ 00:15:18.688 Range in us Cumulative Count 00:15:18.688 5.001 - 5.025: 0.0417% ( 4) 00:15:18.688 5.025 - 5.049: 0.0938% ( 5) 00:15:18.688 5.049 - 5.073: 0.1876% ( 9) 00:15:18.688 5.073 - 5.096: 0.2605% ( 7) 00:15:18.688 5.096 - 5.120: 0.4064% ( 14) 00:15:18.688 5.120 - 5.144: 0.4272% ( 2) 00:15:18.688 5.144 - 5.167: 0.4793% ( 5) 00:15:18.688 5.167 - 5.191: 0.9274% ( 43) 00:15:18.688 5.191 - 5.215: 2.6258% ( 163) 00:15:18.688 5.215 - 5.239: 5.9602% ( 320) 00:15:18.688 5.239 - 5.262: 11.8579% ( 566) 00:15:18.688 5.262 - 5.286: 19.0059% ( 686) 00:15:18.688 5.286 - 5.310: 25.9664% ( 668) 00:15:18.688 5.310 - 5.333: 31.6036% ( 541) 00:15:18.688 5.333 - 5.357: 34.9484% ( 321) 00:15:18.688 5.357 - 5.381: 36.6052% ( 159) 00:15:18.688 5.381 - 5.404: 38.4287% ( 175) 00:15:18.688 5.404 - 5.428: 41.2212% ( 268) 00:15:18.688 5.428 - 5.452: 44.4201% ( 307) 00:15:18.688 5.452 - 5.476: 48.2651% ( 369) 00:15:18.688 5.476 - 5.499: 51.7349% ( 333) 00:15:18.688 5.499 - 5.523: 54.1523% ( 232) 00:15:18.688 5.523 - 5.547: 55.9237% ( 170) 00:15:18.688 5.547 - 5.570: 57.3825% ( 140) 00:15:18.688 5.570 - 5.594: 58.4766% ( 105) 00:15:18.688 5.594 - 5.618: 59.6124% ( 109) 00:15:18.688 5.618 - 5.641: 60.5502% ( 90) 00:15:18.688 5.641 - 5.665: 61.3317% ( 75) 00:15:18.688 5.665 - 5.689: 61.8527% ( 50) 00:15:18.688 5.689 - 5.713: 62.2486% ( 38) 00:15:18.688 5.713 - 5.736: 63.0822% ( 80) 00:15:18.688 5.736 - 5.760: 66.6979% ( 347) 00:15:18.688 5.760 - 5.784: 74.1169% ( 712) 00:15:18.688 5.784 - 5.807: 80.9420% ( 655) 00:15:18.688 5.807 - 5.831: 90.4970% ( 917) 00:15:18.688 5.831 - 5.855: 95.0505% ( 437) 00:15:18.688 5.855 - 5.879: 95.8320% ( 75) 00:15:18.688 5.879 - 5.902: 96.3739% ( 52) 00:15:18.688 5.902 - 5.926: 96.5823% ( 20) 00:15:18.688 5.926 - 5.950: 96.7281% ( 14) 00:15:18.688 5.950 - 5.973: 96.8219% ( 9) 00:15:18.688 5.973 - 5.997: 96.9053% ( 8) 00:15:18.688 5.997 - 6.021: 97.0199% ( 11) 00:15:18.688 6.021 - 6.044: 97.1449% ( 12) 00:15:18.688 6.044 - 6.068: 97.2596% ( 11) 00:15:18.688 6.068 - 6.116: 97.5305% ( 26) 00:15:18.688 6.116 - 6.163: 97.7285% ( 19) 00:15:18.688 6.163 - 6.210: 97.8639% ( 13) 00:15:18.688 6.210 - 6.258: 97.9577% ( 9) 00:15:18.688 6.258 - 6.305: 98.0619% ( 10) 00:15:18.688 6.305 - 6.353: 98.1348% ( 7) 00:15:18.688 6.353 - 6.400: 98.2078% ( 7) 00:15:18.688 6.400 - 6.447: 98.2807% ( 7) 00:15:18.688 6.447 - 6.495: 98.3328% ( 5) 00:15:18.688 6.495 - 6.542: 98.3849% ( 5) 00:15:18.688 6.542 - 6.590: 98.4266% ( 4) 00:15:18.688 6.590 - 6.637: 98.4579% ( 3) 00:15:18.688 6.637 - 6.684: 98.4891% ( 3) 00:15:18.688 6.684 - 6.732: 98.5204% ( 3) 00:15:18.688 6.732 - 6.779: 98.5412% ( 2) 00:15:18.688 6.779 - 6.827: 98.5829% ( 4) 00:15:18.688 6.827 - 6.874: 98.6037% ( 2) 00:15:18.688 6.874 - 6.921: 98.6350% ( 3) 00:15:18.688 6.921 - 6.969: 98.6558% ( 2) 00:15:18.688 6.969 - 7.016: 98.6767% ( 2) 00:15:18.688 7.016 - 7.064: 98.7079% ( 3) 00:15:18.688 7.064 - 7.111: 98.7183% ( 1) 00:15:18.688 7.111 - 7.159: 98.7392% ( 2) 00:15:18.688 7.206 - 7.253: 98.7496% ( 1) 00:15:18.688 7.301 - 7.348: 98.7600% ( 1) 00:15:18.688 7.348 - 7.396: 98.7809% ( 2) 00:15:18.688 7.680 - 7.727: 98.7913% ( 1) 00:15:18.688 7.870 - 7.917: 98.8017% ( 1) 00:15:18.688 8.676 - 8.723: 98.8121% ( 1) 00:15:18.688 8.913 - 8.960: 98.8225% ( 1) 00:15:18.688 9.244 - 9.292: 98.8330% ( 1) 00:15:18.688 9.292 - 9.339: 98.8434% ( 1) 
00:15:18.688 9.529 - 9.576: 98.8538% ( 1) 00:15:18.688 9.576 - 9.624: 98.8642% ( 1) 00:15:18.688 9.671 - 9.719: 98.8955% ( 3) 00:15:18.688 9.766 - 9.813: 98.9163% ( 2) 00:15:18.688 9.861 - 9.908: 98.9267% ( 1) 00:15:18.688 9.908 - 9.956: 98.9372% ( 1) 00:15:18.688 9.956 - 10.003: 98.9580% ( 2) 00:15:18.688 10.003 - 10.050: 98.9997% ( 4) 00:15:18.688 10.050 - 10.098: 99.0101% ( 1) 00:15:18.688 10.098 - 10.145: 99.0309% ( 2) 00:15:18.688 10.145 - 10.193: 99.0518% ( 2) 00:15:18.688 10.193 - 10.240: 99.0726% ( 2) 00:15:18.688 10.240 - 10.287: 99.0830% ( 1) 00:15:18.688 10.287 - 10.335: 99.1143% ( 3) 00:15:18.688 10.335 - 10.382: 99.1351% ( 2) 00:15:18.688 10.382 - 10.430: 99.1456% ( 1) 00:15:18.688 10.430 - 10.477: 99.1872% ( 4) 00:15:18.688 10.572 - 10.619: 99.2081% ( 2) 00:15:18.688 10.667 - 10.714: 99.2498% ( 4) 00:15:18.688 10.761 - 10.809: 99.2602% ( 1) 00:15:18.688 10.856 - 10.904: 99.2706% ( 1) 00:15:18.688 10.904 - 10.951: 99.2810% ( 1) 00:15:18.688 10.999 - 11.046: 99.3019% ( 2) 00:15:18.688 11.046 - 11.093: 99.3123% ( 1) 00:15:18.688 11.093 - 11.141: 99.3331% ( 2) 00:15:18.688 11.141 - 11.188: 99.3435% ( 1) 00:15:18.688 11.330 - 11.378: 99.3644% ( 2) 00:15:18.688 11.473 - 11.520: 99.3748% ( 1) 00:15:18.688 11.615 - 11.662: 99.3956% ( 2) 00:15:18.688 11.757 - 11.804: 99.4061% ( 1) 00:15:18.688 11.804 - 11.852: 99.4269% ( 2) 00:15:18.688 11.899 - 11.947: 99.4477% ( 2) 00:15:18.688 11.947 - 11.994: 99.4582% ( 1) 00:15:18.688 12.089 - 12.136: 99.4790% ( 2) 00:15:18.688 12.136 - 12.231: 99.4894% ( 1) 00:15:18.688 12.231 - 12.326: 99.4998% ( 1) 00:15:18.688 12.610 - 12.705: 99.5103% ( 1) 00:15:18.688 12.705 - 12.800: 99.5207% ( 1) 00:15:18.688 12.800 - 12.895: 99.5415% ( 2) 00:15:18.688 12.895 - 12.990: 99.5519% ( 1) 00:15:18.688 12.990 - 13.084: 99.5624% ( 1) 00:15:18.688 13.179 - 13.274: 99.5936% ( 3) 00:15:18.688 13.274 - 13.369: 99.6145% ( 2) 00:15:18.688 13.369 - 13.464: 99.6249% ( 1) 00:15:18.688 13.559 - 13.653: 99.6353% ( 1) 00:15:18.688 14.033 - 14.127: 99.6457% ( 1) 00:15:18.688 14.317 - 14.412: 99.6561% ( 1) 00:15:18.688 14.791 - 14.886: 99.6666% ( 1) 00:15:18.688 14.981 - 15.076: 99.6770% ( 1) 00:15:18.688 15.265 - 15.360: 99.6874% ( 1) 00:15:18.688 15.455 - 15.550: 99.6978% ( 1) 00:15:18.688 15.550 - 15.644: 99.7082% ( 1) 00:15:18.688 15.644 - 15.739: 99.7187% ( 1) 00:15:18.688 15.834 - 15.929: 99.7291% ( 1) 00:15:18.688 16.119 - 16.213: 99.7499% ( 2) 00:15:18.688 16.213 - 16.308: 99.7603% ( 1) 00:15:18.688 16.308 - 16.403: 99.7916% ( 3) 00:15:18.688 16.498 - 16.593: 99.8020% ( 1) 00:15:18.688 16.782 - 16.877: 99.8124% ( 1) 00:15:18.688 16.877 - 16.972: 99.8229% ( 1) 00:15:18.688 17.636 - 17.730: 99.8333% ( 1) 00:15:18.688 19.816 - 19.911: 99.8541% ( 2) 00:15:18.688 20.290 - 20.385: 99.8645% ( 1) 00:15:18.688 20.480 - 20.575: 99.8750% ( 1) 00:15:18.688 20.954 - 21.049: 99.8854% ( 1) 00:15:18.688 21.144 - 21.239: 99.8958% ( 1) 00:15:18.688 22.281 - 22.376: 99.9062% ( 1) 00:15:18.688 3980.705 - 4004.978: 99.9792% ( 7) 00:15:18.688 4004.978 - 4029.250: 100.0000% ( 2) 00:15:18.688 00:15:18.688 Complete histogram 00:15:18.688 ================== 00:15:18.688 Range in us Cumulative Count 00:15:18.688 2.880 - 2.892: 0.1042% ( 10) 00:15:18.688 2.892 - 2.904: 0.4897% ( 37) 00:15:18.688 2.904 - 2.916: 0.6356% ( 14) 00:15:18.688 2.916 - 2.927: 0.6565% ( 2) 00:15:18.688 2.927 - 2.939: 0.7398% ( 8) 00:15:18.688 2.939 - 2.951: 0.9482% ( 20) 00:15:18.688 2.951 - 2.963: 1.0003% ( 5) 00:15:18.688 2.963 - 2.975: 1.0212% ( 2) 00:15:18.688 2.975 - 2.987: 1.0316% ( 1) 00:15:18.688 2.987 - 2.999: 
1.0524% ( 2) 00:15:18.688 2.999 - 3.010: 1.2504% ( 19) 00:15:18.688 3.010 - 3.022: 10.9409% ( 930) 00:15:18.688 3.022 - 3.034: 42.1173% ( 2992) 00:15:18.688 3.034 - 3.058: 62.7384% ( 1979) 00:15:18.688 3.058 - 3.081: 83.6928% ( 2011) 00:15:18.688 3.081 - 3.105: 93.4354% ( 935) 00:15:18.688 3.105 - 3.129: 96.6552% ( 309) 00:15:18.688 3.129 - 3.153: 97.7076% ( 101) 00:15:18.688 [2024-07-24 20:09:22.104923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.688 3.153 - 3.176: 97.9369% ( 22) 00:15:18.688 3.176 - 3.200: 98.0098% ( 7) 00:15:18.688 3.200 - 3.224: 98.0515% ( 4) 00:15:18.688 3.224 - 3.247: 98.1348% ( 8) 00:15:18.688 3.247 - 3.271: 98.2182% ( 8) 00:15:18.688 3.295 - 3.319: 98.2286% ( 1) 00:15:18.688 3.319 - 3.342: 98.2495% ( 2) 00:15:18.688 3.342 - 3.366: 98.2911% ( 4) 00:15:18.688 3.366 - 3.390: 98.3120% ( 2) 00:15:18.688 3.413 - 3.437: 98.3328% ( 2) 00:15:18.688 3.437 - 3.461: 98.3641% ( 3) 00:15:18.688 3.484 - 3.508: 98.3745% ( 1) 00:15:18.688 3.627 - 3.650: 98.3953% ( 2) 00:15:18.688 3.698 - 3.721: 98.4058% ( 1) 00:15:18.689 3.769 - 3.793: 98.4162% ( 1) 00:15:18.689 3.793 - 3.816: 98.4266% ( 1) 00:15:18.689 3.816 - 3.840: 98.4474% ( 2) 00:15:18.689 3.864 - 3.887: 98.4579% ( 1) 00:15:18.689 3.887 - 3.911: 98.4683% ( 1) 00:15:18.689 4.053 - 4.077: 98.4787% ( 1) 00:15:18.689 4.077 - 4.101: 98.4891% ( 1) 00:15:18.689 4.196 - 4.219: 98.5100% ( 2) 00:15:18.689 4.267 - 4.290: 98.5516% ( 4) 00:15:18.689 4.290 - 4.314: 98.5725% ( 2) 00:15:18.689 4.361 - 4.385: 98.6246% ( 5) 00:15:18.689 4.385 - 4.409: 98.6454% ( 2) 00:15:18.689 4.409 - 4.433: 98.6558% ( 1) 00:15:18.689 4.456 - 4.480: 98.6767% ( 2) 00:15:18.689 4.504 - 4.527: 98.6871% ( 1) 00:15:18.689 4.527 - 4.551: 98.6975% ( 1) 00:15:18.689 4.599 - 4.622: 98.7079% ( 1) 00:15:18.689 4.622 - 4.646: 98.7183% ( 1) 00:15:18.689 4.693 - 4.717: 98.7288% ( 1) 00:15:18.689 4.717 - 4.741: 98.7392% ( 1) 00:15:18.689 4.741 - 4.764: 98.7496% ( 1) 00:15:18.689 4.788 - 4.812: 98.7600% ( 1) 00:15:18.689 4.930 - 4.954: 98.7704% ( 1) 00:15:18.689 5.073 - 5.096: 98.7809% ( 1) 00:15:18.689 5.333 - 5.357: 98.7913% ( 1) 00:15:18.689 5.476 - 5.499: 98.8017% ( 1) 00:15:18.689 6.258 - 6.305: 98.8121% ( 1) 00:15:18.689 6.732 - 6.779: 98.8225% ( 1) 00:15:18.689 7.443 - 7.490: 98.8434% ( 2) 00:15:18.689 7.490 - 7.538: 98.8538% ( 1) 00:15:18.689 7.585 - 7.633: 98.8642% ( 1) 00:15:18.689 7.633 - 7.680: 98.8851% ( 2) 00:15:18.689 7.680 - 7.727: 98.9059% ( 2) 00:15:18.689 7.727 - 7.775: 98.9372% ( 3) 00:15:18.689 7.775 - 7.822: 98.9476% ( 1) 00:15:18.689 7.870 - 7.917: 98.9684% ( 2) 00:15:18.689 8.296 - 8.344: 98.9788% ( 1) 00:15:18.689 8.344 - 8.391: 99.0101% ( 3) 00:15:18.689 8.391 - 8.439: 99.0205% ( 1) 00:15:18.689 8.486 - 8.533: 99.0309% ( 1) 00:15:18.689 8.533 - 8.581: 99.0518% ( 2) 00:15:18.689 8.913 - 8.960: 99.0622% ( 1) 00:15:18.689 9.197 - 9.244: 99.0830% ( 2) 00:15:18.689 9.244 - 9.292: 99.0935% ( 1) 00:15:18.689 9.481 - 9.529: 99.1039% ( 1) 00:15:18.689 9.576 - 9.624: 99.1143% ( 1) 00:15:18.689 11.947 - 11.994: 99.1247% ( 1) 00:15:18.689 22.945 - 23.040: 99.1351% ( 1) 00:15:18.689 3980.705 - 4004.978: 99.9687% ( 80) 00:15:18.689 4004.978 - 4029.250: 100.0000% ( 3) 00:15:18.689 00:15:18.689 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:18.689 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local
traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:18.689 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:18.689 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:18.689 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.689 [ 00:15:18.689 { 00:15:18.689 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.689 "subtype": "Discovery", 00:15:18.689 "listen_addresses": [], 00:15:18.689 "allow_any_host": true, 00:15:18.689 "hosts": [] 00:15:18.689 }, 00:15:18.689 { 00:15:18.689 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.689 "subtype": "NVMe", 00:15:18.689 "listen_addresses": [ 00:15:18.689 { 00:15:18.689 "trtype": "VFIOUSER", 00:15:18.689 "adrfam": "IPv4", 00:15:18.689 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.689 "trsvcid": "0" 00:15:18.689 } 00:15:18.689 ], 00:15:18.689 "allow_any_host": true, 00:15:18.689 "hosts": [], 00:15:18.689 "serial_number": "SPDK1", 00:15:18.689 "model_number": "SPDK bdev Controller", 00:15:18.689 "max_namespaces": 32, 00:15:18.689 "min_cntlid": 1, 00:15:18.689 "max_cntlid": 65519, 00:15:18.689 "namespaces": [ 00:15:18.689 { 00:15:18.689 "nsid": 1, 00:15:18.689 "bdev_name": "Malloc1", 00:15:18.689 "name": "Malloc1", 00:15:18.689 "nguid": "956A363E87B04EBBB55FA6D1E1135FF6", 00:15:18.689 "uuid": "956a363e-87b0-4ebb-b55f-a6d1e1135ff6" 00:15:18.689 } 00:15:18.689 ] 00:15:18.689 }, 00:15:18.689 { 00:15:18.689 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.689 "subtype": "NVMe", 00:15:18.689 "listen_addresses": [ 00:15:18.689 { 00:15:18.689 "trtype": "VFIOUSER", 00:15:18.689 "adrfam": "IPv4", 00:15:18.689 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.689 "trsvcid": "0" 00:15:18.689 } 00:15:18.689 ], 00:15:18.689 "allow_any_host": true, 00:15:18.689 "hosts": [], 00:15:18.689 "serial_number": "SPDK2", 00:15:18.689 "model_number": "SPDK bdev Controller", 00:15:18.689 "max_namespaces": 32, 00:15:18.689 "min_cntlid": 1, 00:15:18.689 "max_cntlid": 65519, 00:15:18.689 "namespaces": [ 00:15:18.689 { 00:15:18.689 "nsid": 1, 00:15:18.689 "bdev_name": "Malloc2", 00:15:18.689 "name": "Malloc2", 00:15:18.689 "nguid": "20F6E185E1BA4B46BD8A11E6269EC9E5", 00:15:18.689 "uuid": "20f6e185-e1ba-4b46-bd8a-11e6269ec9e5" 00:15:18.689 } 00:15:18.689 ] 00:15:18.689 } 00:15:18.689 ] 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2030148 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:18.946 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:18.946 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.946 [2024-07-24 20:09:22.667097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.202 Malloc3 00:15:19.202 20:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:19.459 [2024-07-24 20:09:23.185910] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.459 20:09:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.717 Asynchronous Event Request test 00:15:19.717 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.717 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.717 Registering asynchronous event callbacks... 00:15:19.717 Starting namespace attribute notice tests for all controllers... 00:15:19.717 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:19.717 aer_cb - Changed Namespace 00:15:19.717 Cleaning up... 
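The aer_vfio_user step above reduces to a short shell sequence: start the aer example, wait for its touch file, then hot-add a namespace over RPC so the target raises a Namespace Attribute Changed event. A minimal sketch, assuming the target from this run is already serving nqn.2019-07.io.spdk:cnode1; the paths and flags are copied from the commands logged above, while the background/wait plumbing is a simplification of the harness's waitforfile helper:

  # Sketch of the AER flow exercised above (assumes a running vfio-user target).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the aer example; it arms AER callbacks (-n 2) and touches the file when ready.
  $SPDK/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  until [ -e /tmp/aer_touch_file ]; do sleep 1; done   # aer has registered its callbacks
  rm -f /tmp/aer_touch_file
  # Hot-add a namespace; the controller emits the AEN seen as "aer_cb - Changed Namespace".
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  $SPDK/scripts/rpc.py nvmf_get_subsystems   # Malloc3 should now be listed as nsid 2
  wait $aerpid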
00:15:20.284 [ 00:15:20.284 { 00:15:20.284 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:20.284 "subtype": "Discovery", 00:15:20.284 "listen_addresses": [], 00:15:20.284 "allow_any_host": true, 00:15:20.284 "hosts": [] 00:15:20.284 }, 00:15:20.284 { 00:15:20.284 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:20.284 "subtype": "NVMe", 00:15:20.284 "listen_addresses": [ 00:15:20.284 { 00:15:20.284 "trtype": "VFIOUSER", 00:15:20.284 "adrfam": "IPv4", 00:15:20.284 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:20.284 "trsvcid": "0" 00:15:20.284 } 00:15:20.284 ], 00:15:20.284 "allow_any_host": true, 00:15:20.284 "hosts": [], 00:15:20.284 "serial_number": "SPDK1", 00:15:20.284 "model_number": "SPDK bdev Controller", 00:15:20.284 "max_namespaces": 32, 00:15:20.284 "min_cntlid": 1, 00:15:20.284 "max_cntlid": 65519, 00:15:20.284 "namespaces": [ 00:15:20.284 { 00:15:20.284 "nsid": 1, 00:15:20.284 "bdev_name": "Malloc1", 00:15:20.284 "name": "Malloc1", 00:15:20.284 "nguid": "956A363E87B04EBBB55FA6D1E1135FF6", 00:15:20.284 "uuid": "956a363e-87b0-4ebb-b55f-a6d1e1135ff6" 00:15:20.284 }, 00:15:20.284 { 00:15:20.284 "nsid": 2, 00:15:20.284 "bdev_name": "Malloc3", 00:15:20.284 "name": "Malloc3", 00:15:20.284 "nguid": "C808C11DEE2D479C8AEF76F87BDF5155", 00:15:20.284 "uuid": "c808c11d-ee2d-479c-8aef-76f87bdf5155" 00:15:20.284 } 00:15:20.284 ] 00:15:20.284 }, 00:15:20.284 { 00:15:20.284 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:20.284 "subtype": "NVMe", 00:15:20.284 "listen_addresses": [ 00:15:20.284 { 00:15:20.284 "trtype": "VFIOUSER", 00:15:20.284 "adrfam": "IPv4", 00:15:20.284 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:20.284 "trsvcid": "0" 00:15:20.284 } 00:15:20.284 ], 00:15:20.284 "allow_any_host": true, 00:15:20.284 "hosts": [], 00:15:20.284 "serial_number": "SPDK2", 00:15:20.284 "model_number": "SPDK bdev Controller", 00:15:20.284 "max_namespaces": 32, 00:15:20.284 "min_cntlid": 1, 00:15:20.284 "max_cntlid": 65519, 00:15:20.284 "namespaces": [ 00:15:20.284 { 00:15:20.284 "nsid": 1, 00:15:20.284 "bdev_name": "Malloc2", 00:15:20.284 "name": "Malloc2", 00:15:20.284 "nguid": "20F6E185E1BA4B46BD8A11E6269EC9E5", 00:15:20.284 "uuid": "20f6e185-e1ba-4b46-bd8a-11e6269ec9e5" 00:15:20.284 } 00:15:20.284 ] 00:15:20.284 } 00:15:20.284 ] 00:15:20.284 20:09:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2030148 00:15:20.284 20:09:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.284 20:09:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:20.284 20:09:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:20.284 20:09:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:20.284 [2024-07-24 20:09:23.842596] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:15:20.284 [2024-07-24 20:09:23.842648] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030286 ] 00:15:20.284 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.284 [2024-07-24 20:09:23.894359] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:20.284 [2024-07-24 20:09:23.900790] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:20.284 [2024-07-24 20:09:23.900831] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8a2203d000 00:15:20.284 [2024-07-24 20:09:23.901788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.284 [2024-07-24 20:09:23.902795] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.284 [2024-07-24 20:09:23.903799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.284 [2024-07-24 20:09:23.904813] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.284 [2024-07-24 20:09:23.905827] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.284 [2024-07-24 20:09:23.906844] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.284 [2024-07-24 20:09:23.907847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.284 [2024-07-24 20:09:23.908856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.284 [2024-07-24 20:09:23.909873] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:20.285 [2024-07-24 20:09:23.909902] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8a22032000 00:15:20.285 [2024-07-24 20:09:23.911473] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:20.285 [2024-07-24 20:09:23.933101] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:20.285 [2024-07-24 20:09:23.933151] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:20.285 [2024-07-24 20:09:23.938296] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:20.285 [2024-07-24 20:09:23.938378] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:20.285 [2024-07-24 20:09:23.938520] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:15:20.285 [2024-07-24 20:09:23.938557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:20.285 [2024-07-24 20:09:23.938573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:20.285 [2024-07-24 20:09:23.939298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:20.285 [2024-07-24 20:09:23.939337] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:20.285 [2024-07-24 20:09:23.939357] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:20.285 [2024-07-24 20:09:23.940308] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:20.285 [2024-07-24 20:09:23.940336] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:20.285 [2024-07-24 20:09:23.940356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:20.285 [2024-07-24 20:09:23.941314] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:20.285 [2024-07-24 20:09:23.941345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:20.285 [2024-07-24 20:09:23.942318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:20.285 [2024-07-24 20:09:23.942348] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:20.285 [2024-07-24 20:09:23.942361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:20.285 [2024-07-24 20:09:23.942378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:20.285 [2024-07-24 20:09:23.942492] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:20.285 [2024-07-24 20:09:23.942505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:20.285 [2024-07-24 20:09:23.942517] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:20.285 [2024-07-24 20:09:23.943327] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:20.285 [2024-07-24 20:09:23.944339] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:20.285 [2024-07-24 20:09:23.945343] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:20.285 [2024-07-24 20:09:23.946336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:20.285 [2024-07-24 20:09:23.946435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:20.285 [2024-07-24 20:09:23.947358] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:20.285 [2024-07-24 20:09:23.947386] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:20.285 [2024-07-24 20:09:23.947400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:20.285 [2024-07-24 20:09:23.947441] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:20.285 [2024-07-24 20:09:23.947466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:20.285 [2024-07-24 20:09:23.947500] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.285 [2024-07-24 20:09:23.947514] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.285 [2024-07-24 20:09:23.947524] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.285 [2024-07-24 20:09:23.947549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.285 [2024-07-24 20:09:23.955447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:20.285 [2024-07-24 20:09:23.955478] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:20.285 [2024-07-24 20:09:23.955490] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:20.285 [2024-07-24 20:09:23.955501] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:20.285 [2024-07-24 20:09:23.955511] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:20.285 [2024-07-24 20:09:23.955523] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:20.285 [2024-07-24 20:09:23.955534] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:20.285 [2024-07-24 20:09:23.955545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:20.285 [2024-07-24 20:09:23.955563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:20.285 [2024-07-24 20:09:23.955592] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:20.285 [2024-07-24 20:09:23.963445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:20.285 [2024-07-24 20:09:23.963484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.285 [2024-07-24 20:09:23.963505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.285 [2024-07-24 20:09:23.963522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.285 [2024-07-24 20:09:23.963539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.285 [2024-07-24 20:09:23.963552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:20.285 [2024-07-24 20:09:23.963573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:20.285 [2024-07-24 20:09:23.963595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:20.285 [2024-07-24 20:09:23.971444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:20.285 [2024-07-24 20:09:23.971469] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:20.285 [2024-07-24 20:09:23.971483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.971511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.971527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.971547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:23.979444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:20.286 [2024-07-24 20:09:23.979550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.979574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.979594] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:20.286 [2024-07-24 20:09:23.979606] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:20.286 [2024-07-24 
20:09:23.979615] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.286 [2024-07-24 20:09:23.979630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:23.987447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:20.286 [2024-07-24 20:09:23.987492] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:20.286 [2024-07-24 20:09:23.987521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.987543] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.987561] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.286 [2024-07-24 20:09:23.987573] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.286 [2024-07-24 20:09:23.987582] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.286 [2024-07-24 20:09:23.987596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:23.995443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:20.286 [2024-07-24 20:09:23.995493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.995517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:23.995537] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.286 [2024-07-24 20:09:23.995548] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.286 [2024-07-24 20:09:23.995557] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.286 [2024-07-24 20:09:23.995570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:24.003493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:20.286 [2024-07-24 20:09:24.003531] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:24.003551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:24.003574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:20.286 [2024-07-24 
20:09:24.003593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:24.003606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:24.003618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:24.003630] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:20.286 [2024-07-24 20:09:24.003641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:20.286 [2024-07-24 20:09:24.003653] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:20.286 [2024-07-24 20:09:24.003688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:24.014442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:20.286 [2024-07-24 20:09:24.014490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:24.022447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:20.286 [2024-07-24 20:09:24.022484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:24.030445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:20.286 [2024-07-24 20:09:24.030480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:24.038441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:20.286 [2024-07-24 20:09:24.038499] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:20.286 [2024-07-24 20:09:24.038516] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:20.286 [2024-07-24 20:09:24.038525] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:20.286 [2024-07-24 20:09:24.038534] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:20.286 [2024-07-24 20:09:24.038542] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:20.286 [2024-07-24 20:09:24.038556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:20.286 [2024-07-24 20:09:24.038573] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:20.286 [2024-07-24 20:09:24.038585] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:15:20.286 [2024-07-24 20:09:24.038594] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.286 [2024-07-24 20:09:24.038608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:24.038630] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:20.286 [2024-07-24 20:09:24.038643] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.286 [2024-07-24 20:09:24.038651] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.286 [2024-07-24 20:09:24.038664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:24.038681] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:20.286 [2024-07-24 20:09:24.038693] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:20.286 [2024-07-24 20:09:24.038701] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.286 [2024-07-24 20:09:24.038719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:20.286 [2024-07-24 20:09:24.046449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:20.287 [2024-07-24 20:09:24.046500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:20.287 [2024-07-24 20:09:24.046526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:20.287 [2024-07-24 20:09:24.046543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:20.287 ===================================================== 00:15:20.287 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:20.287 ===================================================== 00:15:20.287 Controller Capabilities/Features 00:15:20.287 ================================ 00:15:20.287 Vendor ID: 4e58 00:15:20.287 Subsystem Vendor ID: 4e58 00:15:20.287 Serial Number: SPDK2 00:15:20.287 Model Number: SPDK bdev Controller 00:15:20.287 Firmware Version: 24.09 00:15:20.287 Recommended Arb Burst: 6 00:15:20.287 IEEE OUI Identifier: 8d 6b 50 00:15:20.287 Multi-path I/O 00:15:20.287 May have multiple subsystem ports: Yes 00:15:20.287 May have multiple controllers: Yes 00:15:20.287 Associated with SR-IOV VF: No 00:15:20.287 Max Data Transfer Size: 131072 00:15:20.287 Max Number of Namespaces: 32 00:15:20.287 Max Number of I/O Queues: 127 00:15:20.287 NVMe Specification Version (VS): 1.3 00:15:20.287 NVMe Specification Version (Identify): 1.3 00:15:20.287 Maximum Queue Entries: 256 00:15:20.287 Contiguous Queues Required: Yes 00:15:20.287 Arbitration Mechanisms Supported 00:15:20.287 Weighted Round Robin: Not Supported 00:15:20.287 Vendor Specific: Not Supported 00:15:20.287 Reset Timeout: 15000 ms 00:15:20.287 Doorbell Stride: 4 
bytes 00:15:20.287 NVM Subsystem Reset: Not Supported 00:15:20.287 Command Sets Supported 00:15:20.287 NVM Command Set: Supported 00:15:20.287 Boot Partition: Not Supported 00:15:20.287 Memory Page Size Minimum: 4096 bytes 00:15:20.287 Memory Page Size Maximum: 4096 bytes 00:15:20.287 Persistent Memory Region: Not Supported 00:15:20.287 Optional Asynchronous Events Supported 00:15:20.287 Namespace Attribute Notices: Supported 00:15:20.287 Firmware Activation Notices: Not Supported 00:15:20.287 ANA Change Notices: Not Supported 00:15:20.287 PLE Aggregate Log Change Notices: Not Supported 00:15:20.287 LBA Status Info Alert Notices: Not Supported 00:15:20.287 EGE Aggregate Log Change Notices: Not Supported 00:15:20.287 Normal NVM Subsystem Shutdown event: Not Supported 00:15:20.287 Zone Descriptor Change Notices: Not Supported 00:15:20.287 Discovery Log Change Notices: Not Supported 00:15:20.287 Controller Attributes 00:15:20.287 128-bit Host Identifier: Supported 00:15:20.287 Non-Operational Permissive Mode: Not Supported 00:15:20.287 NVM Sets: Not Supported 00:15:20.287 Read Recovery Levels: Not Supported 00:15:20.287 Endurance Groups: Not Supported 00:15:20.287 Predictable Latency Mode: Not Supported 00:15:20.287 Traffic Based Keep ALive: Not Supported 00:15:20.287 Namespace Granularity: Not Supported 00:15:20.287 SQ Associations: Not Supported 00:15:20.287 UUID List: Not Supported 00:15:20.287 Multi-Domain Subsystem: Not Supported 00:15:20.287 Fixed Capacity Management: Not Supported 00:15:20.287 Variable Capacity Management: Not Supported 00:15:20.287 Delete Endurance Group: Not Supported 00:15:20.287 Delete NVM Set: Not Supported 00:15:20.287 Extended LBA Formats Supported: Not Supported 00:15:20.287 Flexible Data Placement Supported: Not Supported 00:15:20.287 00:15:20.287 Controller Memory Buffer Support 00:15:20.287 ================================ 00:15:20.287 Supported: No 00:15:20.287 00:15:20.287 Persistent Memory Region Support 00:15:20.287 ================================ 00:15:20.287 Supported: No 00:15:20.287 00:15:20.287 Admin Command Set Attributes 00:15:20.287 ============================ 00:15:20.287 Security Send/Receive: Not Supported 00:15:20.287 Format NVM: Not Supported 00:15:20.287 Firmware Activate/Download: Not Supported 00:15:20.287 Namespace Management: Not Supported 00:15:20.287 Device Self-Test: Not Supported 00:15:20.287 Directives: Not Supported 00:15:20.287 NVMe-MI: Not Supported 00:15:20.287 Virtualization Management: Not Supported 00:15:20.287 Doorbell Buffer Config: Not Supported 00:15:20.287 Get LBA Status Capability: Not Supported 00:15:20.287 Command & Feature Lockdown Capability: Not Supported 00:15:20.287 Abort Command Limit: 4 00:15:20.287 Async Event Request Limit: 4 00:15:20.287 Number of Firmware Slots: N/A 00:15:20.287 Firmware Slot 1 Read-Only: N/A 00:15:20.287 Firmware Activation Without Reset: N/A 00:15:20.287 Multiple Update Detection Support: N/A 00:15:20.287 Firmware Update Granularity: No Information Provided 00:15:20.287 Per-Namespace SMART Log: No 00:15:20.287 Asymmetric Namespace Access Log Page: Not Supported 00:15:20.287 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:20.287 Command Effects Log Page: Supported 00:15:20.287 Get Log Page Extended Data: Supported 00:15:20.287 Telemetry Log Pages: Not Supported 00:15:20.287 Persistent Event Log Pages: Not Supported 00:15:20.287 Supported Log Pages Log Page: May Support 00:15:20.287 Commands Supported & Effects Log Page: Not Supported 00:15:20.287 Feature Identifiers & Effects Log 
Page:May Support 00:15:20.287 NVMe-MI Commands & Effects Log Page: May Support 00:15:20.287 Data Area 4 for Telemetry Log: Not Supported 00:15:20.287 Error Log Page Entries Supported: 128 00:15:20.287 Keep Alive: Supported 00:15:20.287 Keep Alive Granularity: 10000 ms 00:15:20.287 00:15:20.287 NVM Command Set Attributes 00:15:20.287 ========================== 00:15:20.287 Submission Queue Entry Size 00:15:20.287 Max: 64 00:15:20.287 Min: 64 00:15:20.287 Completion Queue Entry Size 00:15:20.287 Max: 16 00:15:20.287 Min: 16 00:15:20.287 Number of Namespaces: 32 00:15:20.288 Compare Command: Supported 00:15:20.288 Write Uncorrectable Command: Not Supported 00:15:20.288 Dataset Management Command: Supported 00:15:20.288 Write Zeroes Command: Supported 00:15:20.288 Set Features Save Field: Not Supported 00:15:20.288 Reservations: Not Supported 00:15:20.288 Timestamp: Not Supported 00:15:20.288 Copy: Supported 00:15:20.288 Volatile Write Cache: Present 00:15:20.288 Atomic Write Unit (Normal): 1 00:15:20.288 Atomic Write Unit (PFail): 1 00:15:20.288 Atomic Compare & Write Unit: 1 00:15:20.288 Fused Compare & Write: Supported 00:15:20.288 Scatter-Gather List 00:15:20.288 SGL Command Set: Supported (Dword aligned) 00:15:20.288 SGL Keyed: Not Supported 00:15:20.288 SGL Bit Bucket Descriptor: Not Supported 00:15:20.288 SGL Metadata Pointer: Not Supported 00:15:20.288 Oversized SGL: Not Supported 00:15:20.288 SGL Metadata Address: Not Supported 00:15:20.288 SGL Offset: Not Supported 00:15:20.288 Transport SGL Data Block: Not Supported 00:15:20.288 Replay Protected Memory Block: Not Supported 00:15:20.288 00:15:20.288 Firmware Slot Information 00:15:20.288 ========================= 00:15:20.288 Active slot: 1 00:15:20.288 Slot 1 Firmware Revision: 24.09 00:15:20.288 00:15:20.288 00:15:20.288 Commands Supported and Effects 00:15:20.288 ============================== 00:15:20.288 Admin Commands 00:15:20.288 -------------- 00:15:20.288 Get Log Page (02h): Supported 00:15:20.288 Identify (06h): Supported 00:15:20.288 Abort (08h): Supported 00:15:20.288 Set Features (09h): Supported 00:15:20.288 Get Features (0Ah): Supported 00:15:20.288 Asynchronous Event Request (0Ch): Supported 00:15:20.288 Keep Alive (18h): Supported 00:15:20.288 I/O Commands 00:15:20.288 ------------ 00:15:20.288 Flush (00h): Supported LBA-Change 00:15:20.288 Write (01h): Supported LBA-Change 00:15:20.288 Read (02h): Supported 00:15:20.288 Compare (05h): Supported 00:15:20.288 Write Zeroes (08h): Supported LBA-Change 00:15:20.288 Dataset Management (09h): Supported LBA-Change 00:15:20.288 Copy (19h): Supported LBA-Change 00:15:20.288 00:15:20.288 Error Log 00:15:20.288 ========= 00:15:20.288 00:15:20.288 Arbitration 00:15:20.288 =========== 00:15:20.288 Arbitration Burst: 1 00:15:20.288 00:15:20.288 Power Management 00:15:20.288 ================ 00:15:20.288 Number of Power States: 1 00:15:20.288 Current Power State: Power State #0 00:15:20.288 Power State #0: 00:15:20.288 Max Power: 0.00 W 00:15:20.288 Non-Operational State: Operational 00:15:20.288 Entry Latency: Not Reported 00:15:20.288 Exit Latency: Not Reported 00:15:20.288 Relative Read Throughput: 0 00:15:20.288 Relative Read Latency: 0 00:15:20.288 Relative Write Throughput: 0 00:15:20.288 Relative Write Latency: 0 00:15:20.288 Idle Power: Not Reported 00:15:20.288 Active Power: Not Reported 00:15:20.288 Non-Operational Permissive Mode: Not Supported 00:15:20.288 00:15:20.288 Health Information 00:15:20.288 ================== 00:15:20.288 Critical Warnings: 00:15:20.288 
Available Spare Space: OK 00:15:20.288 Temperature: OK 00:15:20.288 Device Reliability: OK 00:15:20.288 Read Only: No 00:15:20.288 Volatile Memory Backup: OK 00:15:20.288 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:20.288 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:20.288 Available Spare: 0% 00:15:20.288 Available Spare Threshold: 0% [2024-07-24 20:09:24.046721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:20.288 [2024-07-24 20:09:24.054444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:20.288 [2024-07-24 20:09:24.054518] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:20.288 [2024-07-24 20:09:24.054543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.288 [2024-07-24 20:09:24.054559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.288 [2024-07-24 20:09:24.054574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.288 [2024-07-24 20:09:24.054588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.288 [2024-07-24 20:09:24.054708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:20.288 [2024-07-24 20:09:24.054739] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:20.288 [2024-07-24 20:09:24.055705] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.288 [2024-07-24 20:09:24.055802] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:20.288 [2024-07-24 20:09:24.055823] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:20.288 [2024-07-24 20:09:24.056718] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:20.288 [2024-07-24 20:09:24.056764] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:20.288 [2024-07-24 20:09:24.056838] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:20.288 [2024-07-24 20:09:24.064444] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:20.546 Life Percentage Used: 0% 00:15:20.546 Data Units Read: 0 00:15:20.546 Data Units Written: 0 00:15:20.546 Host Read Commands: 0 00:15:20.546 Host Write Commands: 0 00:15:20.546 Controller Busy Time: 0 minutes 00:15:20.546 Power Cycles: 0 00:15:20.546 Power On Hours: 0 hours 00:15:20.546 Unsafe Shutdowns: 0 00:15:20.546 Unrecoverable Media Errors: 0 00:15:20.546 Lifetime Error Log Entries: 0 00:15:20.546 Warning Temperature Time: 0 minutes 00:15:20.546 Critical Temperature Time: 0 minutes 00:15:20.546 
00:15:20.546 Number of Queues 00:15:20.546 ================ 00:15:20.546 Number of I/O Submission Queues: 127 00:15:20.546 Number of I/O Completion Queues: 127 00:15:20.546 00:15:20.546 Active Namespaces 00:15:20.546 ================= 00:15:20.546 Namespace ID:1 00:15:20.546 Error Recovery Timeout: Unlimited 00:15:20.546 Command Set Identifier: NVM (00h) 00:15:20.546 Deallocate: Supported 00:15:20.546 Deallocated/Unwritten Error: Not Supported 00:15:20.546 Deallocated Read Value: Unknown 00:15:20.546 Deallocate in Write Zeroes: Not Supported 00:15:20.546 Deallocated Guard Field: 0xFFFF 00:15:20.546 Flush: Supported 00:15:20.546 Reservation: Supported 00:15:20.546 Namespace Sharing Capabilities: Multiple Controllers 00:15:20.546 Size (in LBAs): 131072 (0GiB) 00:15:20.546 Capacity (in LBAs): 131072 (0GiB) 00:15:20.546 Utilization (in LBAs): 131072 (0GiB) 00:15:20.546 NGUID: 20F6E185E1BA4B46BD8A11E6269EC9E5 00:15:20.546 UUID: 20f6e185-e1ba-4b46-bd8a-11e6269ec9e5 00:15:20.546 Thin Provisioning: Not Supported 00:15:20.546 Per-NS Atomic Units: Yes 00:15:20.546 Atomic Boundary Size (Normal): 0 00:15:20.546 Atomic Boundary Size (PFail): 0 00:15:20.546 Atomic Boundary Offset: 0 00:15:20.546 Maximum Single Source Range Length: 65535 00:15:20.546 Maximum Copy Length: 65535 00:15:20.546 Maximum Source Range Count: 1 00:15:20.546 NGUID/EUI64 Never Reused: No 00:15:20.546 Namespace Write Protected: No 00:15:20.546 Number of LBA Formats: 1 00:15:20.546 Current LBA Format: LBA Format #00 00:15:20.546 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:20.546 00:15:20.546 20:09:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:20.546 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.804 [2024-07-24 20:09:24.397234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.071 Initializing NVMe Controllers 00:15:26.071 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.071 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:26.071 Initialization complete. Launching workers. 
00:15:26.071 ======================================================== 00:15:26.071 Latency(us) 00:15:26.071 Device Information : IOPS MiB/s Average min max 00:15:26.071 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24065.17 94.00 5318.78 1676.67 9571.51 00:15:26.071 ======================================================== 00:15:26.071 Total : 24065.17 94.00 5318.78 1676.67 9571.51 00:15:26.071 00:15:26.071 [2024-07-24 20:09:29.498890] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.071 20:09:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:26.071 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.071 [2024-07-24 20:09:29.791784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.333 Initializing NVMe Controllers 00:15:31.333 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.333 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:31.333 Initialization complete. Launching workers. 00:15:31.333 ======================================================== 00:15:31.333 Latency(us) 00:15:31.333 Device Information : IOPS MiB/s Average min max 00:15:31.333 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24095.68 94.12 5312.13 1714.61 10742.73 00:15:31.333 ======================================================== 00:15:31.333 Total : 24095.68 94.12 5312.13 1714.61 10742.73 00:15:31.333 00:15:31.333 [2024-07-24 20:09:34.817534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.333 20:09:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:31.333 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.333 [2024-07-24 20:09:35.089325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.605 [2024-07-24 20:09:40.222923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.605 Initializing NVMe Controllers 00:15:36.605 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.605 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.605 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:36.605 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:36.605 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:36.605 Initialization complete. Launching workers. 
00:15:36.605 Starting thread on core 2 00:15:36.605 Starting thread on core 3 00:15:36.605 Starting thread on core 1 00:15:36.605 20:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:36.605 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.863 [2024-07-24 20:09:40.601475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.150 [2024-07-24 20:09:43.677868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.150 Initializing NVMe Controllers 00:15:40.150 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.150 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.150 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:40.150 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:40.150 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:40.150 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:40.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:40.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:40.150 Initialization complete. Launching workers. 00:15:40.150 Starting thread on core 1 with urgent priority queue 00:15:40.150 Starting thread on core 2 with urgent priority queue 00:15:40.151 Starting thread on core 3 with urgent priority queue 00:15:40.151 Starting thread on core 0 with urgent priority queue 00:15:40.151 SPDK bdev Controller (SPDK2 ) core 0: 3647.33 IO/s 27.42 secs/100000 ios 00:15:40.151 SPDK bdev Controller (SPDK2 ) core 1: 4010.67 IO/s 24.93 secs/100000 ios 00:15:40.151 SPDK bdev Controller (SPDK2 ) core 2: 3914.67 IO/s 25.54 secs/100000 ios 00:15:40.151 SPDK bdev Controller (SPDK2 ) core 3: 4007.67 IO/s 24.95 secs/100000 ios 00:15:40.151 ======================================================== 00:15:40.151 00:15:40.151 20:09:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:40.151 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.408 [2024-07-24 20:09:44.057081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.408 Initializing NVMe Controllers 00:15:40.408 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.408 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.408 Namespace ID: 1 size: 0GB 00:15:40.408 Initialization complete. 00:15:40.408 INFO: using host memory buffer for IO 00:15:40.408 Hello world! 
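Each example binary exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) is ordinary NVMe client code; nothing in it is vfio-user specific, and the endpoint is selected entirely by the -r transport ID string. A minimal sketch of the pattern, using only flags and paths that appear in this run (paths shortened to be relative to the spdk checkout; TRID is an illustrative variable name):

    # trtype picks the vfio-user transport, traddr is the socket directory the
    # target created for this listener, subnqn names the subsystem to attach to.
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    build/examples/hello_world -d 256 -g -r "$TRID"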
00:15:40.408 [2024-07-24 20:09:44.068569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.408 20:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:40.408 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.666 [2024-07-24 20:09:44.415415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:42.040 Initializing NVMe Controllers 00:15:42.040 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.040 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.040 Initialization complete. Launching workers. 00:15:42.040 submit (in ns) avg, min, max = 10969.3, 4903.7, 4007389.6 00:15:42.040 complete (in ns) avg, min, max = 37336.7, 2872.6, 4019077.0 00:15:42.040 00:15:42.040 Submit histogram 00:15:42.040 ================ 00:15:42.040 Range in us Cumulative Count 00:15:42.040 4.883 - 4.907: 0.0105% ( 1) 00:15:42.040 4.907 - 4.930: 0.0314% ( 2) 00:15:42.040 4.930 - 4.954: 0.1045% ( 7) 00:15:42.040 4.954 - 4.978: 0.2299% ( 12) 00:15:42.040 4.978 - 5.001: 0.3554% ( 12) 00:15:42.040 5.001 - 5.025: 0.4390% ( 8) 00:15:42.040 5.025 - 5.049: 0.4703% ( 3) 00:15:42.040 5.049 - 5.073: 0.4912% ( 2) 00:15:42.040 5.073 - 5.096: 0.5121% ( 2) 00:15:42.040 5.096 - 5.120: 1.1288% ( 59) 00:15:42.040 5.120 - 5.144: 3.9611% ( 271) 00:15:42.040 5.144 - 5.167: 9.2809% ( 509) 00:15:42.040 5.167 - 5.191: 15.8340% ( 627) 00:15:42.040 5.191 - 5.215: 23.1605% ( 701) 00:15:42.040 5.215 - 5.239: 28.6894% ( 529) 00:15:42.040 5.239 - 5.262: 32.5251% ( 367) 00:15:42.040 5.262 - 5.286: 34.6572% ( 204) 00:15:42.040 5.286 - 5.310: 36.2772% ( 155) 00:15:42.040 5.310 - 5.333: 38.5242% ( 215) 00:15:42.040 5.333 - 5.357: 42.0778% ( 340) 00:15:42.040 5.357 - 5.381: 46.1747% ( 392) 00:15:42.040 5.381 - 5.404: 50.4494% ( 409) 00:15:42.040 5.404 - 5.428: 53.2922% ( 272) 00:15:42.040 5.428 - 5.452: 55.2467% ( 187) 00:15:42.040 5.452 - 5.476: 56.4590% ( 116) 00:15:42.040 5.476 - 5.499: 57.5982% ( 109) 00:15:42.040 5.499 - 5.523: 58.6643% ( 102) 00:15:42.040 5.523 - 5.547: 59.7304% ( 102) 00:15:42.040 5.547 - 5.570: 60.5038% ( 74) 00:15:42.040 5.570 - 5.594: 61.1831% ( 65) 00:15:42.040 5.594 - 5.618: 61.4548% ( 26) 00:15:42.040 5.618 - 5.641: 61.6116% ( 15) 00:15:42.040 5.641 - 5.665: 61.8938% ( 27) 00:15:42.040 5.665 - 5.689: 65.1965% ( 316) 00:15:42.040 5.689 - 5.713: 73.1396% ( 760) 00:15:42.040 5.713 - 5.736: 81.3754% ( 788) 00:15:42.040 5.736 - 5.760: 89.4231% ( 770) 00:15:42.040 5.760 - 5.784: 94.1890% ( 456) 00:15:42.040 5.784 - 5.807: 95.3073% ( 107) 00:15:42.040 5.807 - 5.831: 95.6940% ( 37) 00:15:42.040 5.831 - 5.855: 95.9553% ( 25) 00:15:42.040 5.855 - 5.879: 96.1538% ( 19) 00:15:42.040 5.879 - 5.902: 96.2479% ( 9) 00:15:42.040 5.902 - 5.926: 96.3420% ( 9) 00:15:42.040 5.926 - 5.950: 96.4465% ( 10) 00:15:42.040 5.950 - 5.973: 96.5928% ( 14) 00:15:42.040 5.973 - 5.997: 96.9168% ( 31) 00:15:42.040 5.997 - 6.021: 97.0945% ( 17) 00:15:42.040 6.021 - 6.044: 97.2931% ( 19) 00:15:42.040 6.044 - 6.068: 97.3244% ( 3) 00:15:42.040 6.068 - 6.116: 97.4394% ( 11) 00:15:42.040 6.116 - 6.163: 97.5021% ( 6) 00:15:42.040 6.163 - 6.210: 97.5857% ( 8) 00:15:42.040 6.210 - 6.258: 97.6693% ( 8) 00:15:42.040 6.258 - 6.305: 97.7738% ( 10) 00:15:42.040 6.305 - 6.353: 97.8783% ( 10) 
00:15:42.040 6.353 - 6.400: 97.9202% ( 4) 00:15:42.040 6.400 - 6.447: 98.0038% ( 8) 00:15:42.040 6.447 - 6.495: 98.1187% ( 11) 00:15:42.040 6.495 - 6.542: 98.1292% ( 1) 00:15:42.040 6.542 - 6.590: 98.1814% ( 5) 00:15:42.040 6.590 - 6.637: 98.2755% ( 9) 00:15:42.040 6.637 - 6.684: 98.3173% ( 4) 00:15:42.040 6.684 - 6.732: 98.4114% ( 9) 00:15:42.040 6.732 - 6.779: 98.4636% ( 5) 00:15:42.040 6.779 - 6.827: 98.5263% ( 6) 00:15:42.040 6.827 - 6.874: 98.5472% ( 2) 00:15:42.040 6.874 - 6.921: 98.5577% ( 1) 00:15:42.040 6.921 - 6.969: 98.5786% ( 2) 00:15:42.040 6.969 - 7.016: 98.6099% ( 3) 00:15:42.040 7.064 - 7.111: 98.6413% ( 3) 00:15:42.040 7.111 - 7.159: 98.6831% ( 4) 00:15:42.040 7.159 - 7.206: 98.7040% ( 2) 00:15:42.040 7.206 - 7.253: 98.7145% ( 1) 00:15:42.040 7.253 - 7.301: 98.7249% ( 1) 00:15:42.040 7.301 - 7.348: 98.7354% ( 1) 00:15:42.040 7.348 - 7.396: 98.7563% ( 2) 00:15:42.040 7.396 - 7.443: 98.7876% ( 3) 00:15:42.040 7.443 - 7.490: 98.8190% ( 3) 00:15:42.040 7.538 - 7.585: 98.8399% ( 2) 00:15:42.040 7.680 - 7.727: 98.8503% ( 1) 00:15:42.040 7.727 - 7.775: 98.8608% ( 1) 00:15:42.040 7.870 - 7.917: 98.8712% ( 1) 00:15:42.040 8.154 - 8.201: 98.8921% ( 2) 00:15:42.040 8.296 - 8.344: 98.9026% ( 1) 00:15:42.040 8.486 - 8.533: 98.9130% ( 1) 00:15:42.040 8.723 - 8.770: 98.9235% ( 1) 00:15:42.040 8.818 - 8.865: 98.9444% ( 2) 00:15:42.040 8.913 - 8.960: 98.9653% ( 2) 00:15:42.040 9.007 - 9.055: 98.9862% ( 2) 00:15:42.040 9.102 - 9.150: 98.9967% ( 1) 00:15:42.040 9.150 - 9.197: 99.0071% ( 1) 00:15:42.040 9.197 - 9.244: 99.0176% ( 1) 00:15:42.040 9.244 - 9.292: 99.0280% ( 1) 00:15:42.040 9.292 - 9.339: 99.0385% ( 1) 00:15:42.040 9.339 - 9.387: 99.0489% ( 1) 00:15:42.040 9.481 - 9.529: 99.0594% ( 1) 00:15:42.040 9.576 - 9.624: 99.0698% ( 1) 00:15:42.040 9.719 - 9.766: 99.0803% ( 1) 00:15:42.040 9.766 - 9.813: 99.1012% ( 2) 00:15:42.040 9.956 - 10.003: 99.1221% ( 2) 00:15:42.040 10.050 - 10.098: 99.1325% ( 1) 00:15:42.040 10.098 - 10.145: 99.1430% ( 1) 00:15:42.041 10.193 - 10.240: 99.1534% ( 1) 00:15:42.041 10.240 - 10.287: 99.1639% ( 1) 00:15:42.041 10.382 - 10.430: 99.1848% ( 2) 00:15:42.041 10.477 - 10.524: 99.1952% ( 1) 00:15:42.041 10.714 - 10.761: 99.2266% ( 3) 00:15:42.041 10.809 - 10.856: 99.2475% ( 2) 00:15:42.041 10.856 - 10.904: 99.2684% ( 2) 00:15:42.041 10.904 - 10.951: 99.2788% ( 1) 00:15:42.041 10.951 - 10.999: 99.2893% ( 1) 00:15:42.041 10.999 - 11.046: 99.2997% ( 1) 00:15:42.041 11.046 - 11.093: 99.3102% ( 1) 00:15:42.041 11.188 - 11.236: 99.3311% ( 2) 00:15:42.041 11.236 - 11.283: 99.3520% ( 2) 00:15:42.041 11.425 - 11.473: 99.3834% ( 3) 00:15:42.041 11.473 - 11.520: 99.4043% ( 2) 00:15:42.041 11.757 - 11.804: 99.4252% ( 2) 00:15:42.041 11.804 - 11.852: 99.4356% ( 1) 00:15:42.041 11.852 - 11.899: 99.4461% ( 1) 00:15:42.041 11.899 - 11.947: 99.4774% ( 3) 00:15:42.041 11.947 - 11.994: 99.4983% ( 2) 00:15:42.041 11.994 - 12.041: 99.5192% ( 2) 00:15:42.041 12.231 - 12.326: 99.5297% ( 1) 00:15:42.041 12.326 - 12.421: 99.5401% ( 1) 00:15:42.041 12.421 - 12.516: 99.5506% ( 1) 00:15:42.041 12.895 - 12.990: 99.5610% ( 1) 00:15:42.041 12.990 - 13.084: 99.5819% ( 2) 00:15:42.041 13.179 - 13.274: 99.5924% ( 1) 00:15:42.041 13.369 - 13.464: 99.6028% ( 1) 00:15:42.041 13.559 - 13.653: 99.6237% ( 2) 00:15:42.041 13.653 - 13.748: 99.6342% ( 1) 00:15:42.041 13.843 - 13.938: 99.6446% ( 1) 00:15:42.041 14.601 - 14.696: 99.6656% ( 2) 00:15:42.041 14.886 - 14.981: 99.6760% ( 1) 00:15:42.041 15.265 - 15.360: 99.6969% ( 2) 00:15:42.041 15.360 - 15.455: 99.7178% ( 2) 00:15:42.041 15.455 - 15.550: 
99.7283% ( 1) 00:15:42.041 15.644 - 15.739: 99.7387% ( 1) 00:15:42.041 15.739 - 15.834: 99.7596% ( 2) 00:15:42.041 15.834 - 15.929: 99.7701% ( 1) 00:15:42.041 15.929 - 16.024: 99.7805% ( 1) 00:15:42.041 16.119 - 16.213: 99.7910% ( 1) 00:15:42.041 16.308 - 16.403: 99.8119% ( 2) 00:15:42.041 17.541 - 17.636: 99.8223% ( 1) 00:15:42.041 19.911 - 20.006: 99.8328% ( 1) 00:15:42.041 20.290 - 20.385: 99.8432% ( 1) 00:15:42.041 20.670 - 20.764: 99.8537% ( 1) 00:15:42.041 21.333 - 21.428: 99.8641% ( 1) 00:15:42.041 3980.705 - 4004.978: 99.9791% ( 11) 00:15:42.041 4004.978 - 4029.250: 100.0000% ( 2) 00:15:42.041 00:15:42.041 Complete histogram 00:15:42.041 ================== 00:15:42.041 Range in us Cumulative Count 00:15:42.041 2.868 - 2.880: 0.0732% ( 7) 00:15:42.041 2.880 - 2.892: 0.6271% ( 53) 00:15:42.041 2.892 - 2.904: 0.9093% ( 27) 00:15:42.041 2.904 - 2.916: 0.9824% ( 7) 00:15:42.041 2.916 - 2.927: 1.0556% ( 7) 00:15:42.041 2.927 - 2.939: 1.2019% ( 14) 00:15:42.041 2.939 - 2.951: 1.2960% ( 9) 00:15:42.041 2.951 - 2.963: 1.3169% ( 2) 00:15:42.041 2.963 - 2.975: 1.3273% ( 1) 00:15:42.041 2.975 - 2.987: 1.3587% ( 3) 00:15:42.041 2.987 - 2.999: 1.4005% ( 4) 00:15:42.041 2.999 - 3.010: 7.0548% ( 541) 00:15:42.041 3.010 - 3.022: 42.1300% ( 3356) [2024-07-24 20:09:45.514490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:42.041 3.022 - 3.034: 61.9356% ( 1895) 00:15:42.041 3.034 - 3.058: 69.2099% ( 696) 00:15:42.041 3.058 - 3.081: 90.4578% ( 2033) 00:15:42.041 3.081 - 3.105: 95.6104% ( 493) 00:15:42.041 3.105 - 3.129: 97.5962% ( 190) 00:15:42.041 3.129 - 3.153: 97.8574% ( 25) 00:15:42.041 3.153 - 3.176: 97.9933% ( 13) 00:15:42.041 3.176 - 3.200: 98.0560% ( 6) 00:15:42.041 3.200 - 3.224: 98.0665% ( 1) 00:15:42.041 3.224 - 3.247: 98.2023% ( 13) 00:15:42.041 3.247 - 3.271: 98.2546% ( 5) 00:15:42.041 3.271 - 3.295: 98.2964% ( 4) 00:15:42.041 3.295 - 3.319: 98.3173% ( 2) 00:15:42.041 3.319 - 3.342: 98.3382% ( 2) 00:15:42.041 3.390 - 3.413: 98.3487% ( 1) 00:15:42.041 3.413 - 3.437: 98.3696% ( 2) 00:15:42.041 3.437 - 3.461: 98.3800% ( 1) 00:15:42.041 3.484 - 3.508: 98.4009% ( 2) 00:15:42.041 3.532 - 3.556: 98.4114% ( 1) 00:15:42.041 3.556 - 3.579: 98.4427% ( 3) 00:15:42.041 3.650 - 3.674: 98.4636% ( 2) 00:15:42.041 3.769 - 3.793: 98.4741% ( 1) 00:15:42.041 3.816 - 3.840: 98.4845% ( 1) 00:15:42.041 3.959 - 3.982: 98.4950% ( 1) 00:15:42.041 4.053 - 4.077: 98.5159% ( 2) 00:15:42.041 4.077 - 4.101: 98.5368% ( 2) 00:15:42.041 4.124 - 4.148: 98.5577% ( 2) 00:15:42.041 4.148 - 4.172: 98.5786% ( 2) 00:15:42.041 4.172 - 4.196: 98.6099% ( 3) 00:15:42.041 4.196 - 4.219: 98.6204% ( 1) 00:15:42.041 4.267 - 4.290: 98.6309% ( 1) 00:15:42.041 4.290 - 4.314: 98.6518% ( 2) 00:15:42.041 4.314 - 4.338: 98.6622% ( 1) 00:15:42.041 4.338 - 4.361: 98.6831% ( 2) 00:15:42.041 4.361 - 4.385: 98.7040% ( 2) 00:15:42.041 4.385 - 4.409: 98.7249% ( 2) 00:15:42.041 4.433 - 4.456: 98.7354% ( 1) 00:15:42.041 4.527 - 4.551: 98.7458% ( 1) 00:15:42.041 4.575 - 4.599: 98.7563% ( 1) 00:15:42.041 5.167 - 5.191: 98.7667% ( 1) 00:15:42.041 5.262 - 5.286: 98.7772% ( 1) 00:15:42.041 5.404 - 5.428: 98.7876% ( 1) 00:15:42.041 5.476 - 5.499: 98.7981% ( 1) 00:15:42.041 5.713 - 5.736: 98.8085% ( 1) 00:15:42.041 6.495 - 6.542: 98.8190% ( 1) 00:15:42.041 6.779 - 6.827: 98.8399% ( 2) 00:15:42.041 6.969 - 7.016: 98.8503% ( 1) 00:15:42.041 7.016 - 7.064: 98.8608% ( 1) 00:15:42.041 7.064 - 7.111: 98.8712% ( 1) 00:15:42.041 7.206 - 7.253: 98.8817% ( 1) 00:15:42.041 7.348 - 7.396: 98.9026% 
( 2) 00:15:42.041 7.490 - 7.538: 98.9130% ( 1) 00:15:42.041 7.633 - 7.680: 98.9235% ( 1) 00:15:42.041 7.680 - 7.727: 98.9339% ( 1) 00:15:42.041 7.727 - 7.775: 98.9444% ( 1) 00:15:42.041 8.012 - 8.059: 98.9548% ( 1) 00:15:42.041 8.059 - 8.107: 98.9653% ( 1) 00:15:42.041 8.533 - 8.581: 98.9758% ( 1) 00:15:42.041 8.581 - 8.628: 98.9967% ( 2) 00:15:42.041 8.818 - 8.865: 99.0071% ( 1) 00:15:42.041 8.960 - 9.007: 99.0176% ( 1) 00:15:42.041 9.529 - 9.576: 99.0280% ( 1) 00:15:42.041 9.861 - 9.908: 99.0385% ( 1) 00:15:42.041 10.050 - 10.098: 99.0489% ( 1) 00:15:42.041 10.809 - 10.856: 99.0594% ( 1) 00:15:42.041 11.141 - 11.188: 99.0698% ( 1) 00:15:42.041 11.994 - 12.041: 99.0803% ( 1) 00:15:42.041 13.748 - 13.843: 99.0907% ( 1) 00:15:42.041 17.351 - 17.446: 99.1116% ( 2) 00:15:42.041 17.636 - 17.730: 99.1221% ( 1) 00:15:42.041 17.920 - 18.015: 99.1325% ( 1) 00:15:42.041 20.764 - 20.859: 99.1430% ( 1) 00:15:42.041 3907.887 - 3932.160: 99.1534% ( 1) 00:15:42.041 3980.705 - 4004.978: 99.9059% ( 72) 00:15:42.041 4004.978 - 4029.250: 100.0000% ( 9) 00:15:42.041 00:15:42.041 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:42.041 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:42.041 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:42.041 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:42.041 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.300 [ 00:15:42.300 { 00:15:42.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.300 "subtype": "Discovery", 00:15:42.300 "listen_addresses": [], 00:15:42.300 "allow_any_host": true, 00:15:42.300 "hosts": [] 00:15:42.300 }, 00:15:42.300 { 00:15:42.300 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.300 "subtype": "NVMe", 00:15:42.300 "listen_addresses": [ 00:15:42.300 { 00:15:42.300 "trtype": "VFIOUSER", 00:15:42.300 "adrfam": "IPv4", 00:15:42.300 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.300 "trsvcid": "0" 00:15:42.300 } 00:15:42.300 ], 00:15:42.300 "allow_any_host": true, 00:15:42.300 "hosts": [], 00:15:42.300 "serial_number": "SPDK1", 00:15:42.300 "model_number": "SPDK bdev Controller", 00:15:42.300 "max_namespaces": 32, 00:15:42.300 "min_cntlid": 1, 00:15:42.300 "max_cntlid": 65519, 00:15:42.300 "namespaces": [ 00:15:42.300 { 00:15:42.300 "nsid": 1, 00:15:42.300 "bdev_name": "Malloc1", 00:15:42.300 "name": "Malloc1", 00:15:42.300 "nguid": "956A363E87B04EBBB55FA6D1E1135FF6", 00:15:42.300 "uuid": "956a363e-87b0-4ebb-b55f-a6d1e1135ff6" 00:15:42.300 }, 00:15:42.300 { 00:15:42.300 "nsid": 2, 00:15:42.300 "bdev_name": "Malloc3", 00:15:42.300 "name": "Malloc3", 00:15:42.300 "nguid": "C808C11DEE2D479C8AEF76F87BDF5155", 00:15:42.300 "uuid": "c808c11d-ee2d-479c-8aef-76f87bdf5155" 00:15:42.300 } 00:15:42.300 ] 00:15:42.300 }, 00:15:42.300 { 00:15:42.300 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.300 "subtype": "NVMe", 00:15:42.300 "listen_addresses": [ 00:15:42.300 { 00:15:42.300 "trtype": "VFIOUSER", 00:15:42.300 "adrfam": "IPv4", 00:15:42.300 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.300 "trsvcid": "0" 00:15:42.300 } 
00:15:42.300 ], 00:15:42.300 "allow_any_host": true, 00:15:42.300 "hosts": [], 00:15:42.300 "serial_number": "SPDK2", 00:15:42.301 "model_number": "SPDK bdev Controller", 00:15:42.301 "max_namespaces": 32, 00:15:42.301 "min_cntlid": 1, 00:15:42.301 "max_cntlid": 65519, 00:15:42.301 "namespaces": [ 00:15:42.301 { 00:15:42.301 "nsid": 1, 00:15:42.301 "bdev_name": "Malloc2", 00:15:42.301 "name": "Malloc2", 00:15:42.301 "nguid": "20F6E185E1BA4B46BD8A11E6269EC9E5", 00:15:42.301 "uuid": "20f6e185-e1ba-4b46-bd8a-11e6269ec9e5" 00:15:42.301 } 00:15:42.301 ] 00:15:42.301 } 00:15:42.301 ] 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2032800 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:42.301 20:09:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:42.301 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.301 [2024-07-24 20:09:46.068071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:42.560 Malloc4 00:15:42.560 20:09:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:43.125 [2024-07-24 20:09:46.646327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.125 20:09:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.125 Asynchronous Event Request test 00:15:43.125 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.125 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.125 Registering asynchronous event callbacks... 00:15:43.125 Starting namespace attribute notice tests for all controllers... 00:15:43.125 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.125 aer_cb - Changed Namespace 00:15:43.125 Cleaning up... 
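The AER exercise above is a three-step handshake: start the aer tool and let it arm an Asynchronous Event Request, wait for the touch file that signals it is ready, then hot-attach a second namespace so the controller raises the namespace-attribute notice ("aer_cb - Changed Namespace"). The nvmf_get_subsystems dump that follows confirms the result, with Malloc4 attached to cnode2 as nsid 2. A hand-run sketch assembled from the RPCs of this run (paths relative to the spdk checkout; the polling loop is an assumption about what the autotest waitforfile helper does):

    test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done   # readiness handshake
    rm -f /tmp/aer_touch_file
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    wait $aerpid   # aer exits once the notice has been handled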
00:15:43.383 [ 00:15:43.383 { 00:15:43.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.383 "subtype": "Discovery", 00:15:43.383 "listen_addresses": [], 00:15:43.383 "allow_any_host": true, 00:15:43.383 "hosts": [] 00:15:43.383 }, 00:15:43.383 { 00:15:43.383 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.383 "subtype": "NVMe", 00:15:43.383 "listen_addresses": [ 00:15:43.383 { 00:15:43.383 "trtype": "VFIOUSER", 00:15:43.383 "adrfam": "IPv4", 00:15:43.383 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.383 "trsvcid": "0" 00:15:43.383 } 00:15:43.383 ], 00:15:43.383 "allow_any_host": true, 00:15:43.383 "hosts": [], 00:15:43.383 "serial_number": "SPDK1", 00:15:43.383 "model_number": "SPDK bdev Controller", 00:15:43.383 "max_namespaces": 32, 00:15:43.383 "min_cntlid": 1, 00:15:43.383 "max_cntlid": 65519, 00:15:43.383 "namespaces": [ 00:15:43.383 { 00:15:43.383 "nsid": 1, 00:15:43.383 "bdev_name": "Malloc1", 00:15:43.383 "name": "Malloc1", 00:15:43.383 "nguid": "956A363E87B04EBBB55FA6D1E1135FF6", 00:15:43.383 "uuid": "956a363e-87b0-4ebb-b55f-a6d1e1135ff6" 00:15:43.383 }, 00:15:43.383 { 00:15:43.383 "nsid": 2, 00:15:43.383 "bdev_name": "Malloc3", 00:15:43.383 "name": "Malloc3", 00:15:43.383 "nguid": "C808C11DEE2D479C8AEF76F87BDF5155", 00:15:43.383 "uuid": "c808c11d-ee2d-479c-8aef-76f87bdf5155" 00:15:43.383 } 00:15:43.383 ] 00:15:43.383 }, 00:15:43.383 { 00:15:43.383 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.383 "subtype": "NVMe", 00:15:43.383 "listen_addresses": [ 00:15:43.383 { 00:15:43.383 "trtype": "VFIOUSER", 00:15:43.383 "adrfam": "IPv4", 00:15:43.383 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.383 "trsvcid": "0" 00:15:43.383 } 00:15:43.383 ], 00:15:43.383 "allow_any_host": true, 00:15:43.383 "hosts": [], 00:15:43.383 "serial_number": "SPDK2", 00:15:43.383 "model_number": "SPDK bdev Controller", 00:15:43.384 "max_namespaces": 32, 00:15:43.384 "min_cntlid": 1, 00:15:43.384 "max_cntlid": 65519, 00:15:43.384 "namespaces": [ 00:15:43.384 { 00:15:43.384 "nsid": 1, 00:15:43.384 "bdev_name": "Malloc2", 00:15:43.384 "name": "Malloc2", 00:15:43.384 "nguid": "20F6E185E1BA4B46BD8A11E6269EC9E5", 00:15:43.384 "uuid": "20f6e185-e1ba-4b46-bd8a-11e6269ec9e5" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nsid": 2, 00:15:43.384 "bdev_name": "Malloc4", 00:15:43.384 "name": "Malloc4", 00:15:43.384 "nguid": "844EABE0DAA746B3A6C7D3F0BDDDF59F", 00:15:43.384 "uuid": "844eabe0-daa7-46b3-a6c7-d3f0bdddf59f" 00:15:43.384 } 00:15:43.384 ] 00:15:43.384 } 00:15:43.384 ] 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2032800 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2026346 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2026346 ']' 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2026346 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2026346 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2026346' 00:15:43.384 killing process with pid 2026346 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2026346 00:15:43.384 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2026346 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2033065 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2033065' 00:15:43.950 Process pid: 2033065 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2033065 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2033065 ']' 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.950 20:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:43.950 [2024-07-24 20:09:47.678821] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:43.950 [2024-07-24 20:09:47.681469] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:15:43.950 [2024-07-24 20:09:47.681598] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.209 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.209 [2024-07-24 20:09:47.801699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.209 [2024-07-24 20:09:47.945341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.209 [2024-07-24 20:09:47.945417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.209 [2024-07-24 20:09:47.945446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.209 [2024-07-24 20:09:47.945462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.209 [2024-07-24 20:09:47.945481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.209 [2024-07-24 20:09:47.945566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.209 [2024-07-24 20:09:47.945601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.209 [2024-07-24 20:09:47.945671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.209 [2024-07-24 20:09:47.945675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.467 [2024-07-24 20:09:48.075276] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:44.467 [2024-07-24 20:09:48.075569] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:44.467 [2024-07-24 20:09:48.075865] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:44.467 [2024-07-24 20:09:48.076638] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:44.467 [2024-07-24 20:09:48.076953] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
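Relative to the first target, this restart changes exactly two things, and the notices above show both taking effect: the --interrupt-mode app flag moves the reactors and each spdk_thread from busy-polling to event-driven wakeups, and the VFIOUSER transport is created with the additional -M -I options (seen in the nvmf_create_transport call just below). A minimal sketch of the restart sequence from this run (relative paths; the log does not spell out the -M/-I semantics, so treat the comment as an assumption and consult scripts/rpc.py nvmf_create_transport -h):

    # target comes up on four cores with all reactors in interrupt mode
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # vfio-user transport options this test pairs with interrupt mode
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I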
00:15:45.399 20:09:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.399 20:09:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:45.400 20:09:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:46.333 20:09:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:46.592 20:09:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:46.592 20:09:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:46.592 20:09:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.592 20:09:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:46.592 20:09:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:47.161 Malloc1 00:15:47.161 20:09:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:47.419 20:09:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:47.677 20:09:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:47.936 20:09:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:47.936 20:09:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:47.936 20:09:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:48.504 Malloc2 00:15:48.504 20:09:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:49.070 20:09:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:49.672 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:49.930 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:49.930 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2033065 00:15:49.930 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 2033065 ']' 00:15:49.930 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2033065 00:15:49.930 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:49.930 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:50.188 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2033065 00:15:50.188 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:50.188 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:50.188 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2033065' 00:15:50.188 killing process with pid 2033065 00:15:50.188 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2033065 00:15:50.188 20:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2033065 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:50.757 00:15:50.757 real 0m59.734s 00:15:50.757 user 3m55.674s 00:15:50.757 sys 0m6.530s 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:50.757 ************************************ 00:15:50.757 END TEST nvmf_vfio_user 00:15:50.757 ************************************ 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.757 ************************************ 00:15:50.757 START TEST nvmf_vfio_user_nvme_compliance 00:15:50.757 ************************************ 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.757 * Looking for test storage... 
00:15:50.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.757 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2033932 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2033932' 00:15:50.758 Process pid: 2033932 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2033932 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2033932 ']' 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.758 20:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.758 [2024-07-24 20:09:54.466525] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:15:50.758 [2024-07-24 20:09:54.466619] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.758 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.017 [2024-07-24 20:09:54.563543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:51.017 [2024-07-24 20:09:54.760544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.017 [2024-07-24 20:09:54.760658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.017 [2024-07-24 20:09:54.760693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.017 [2024-07-24 20:09:54.760722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.017 [2024-07-24 20:09:54.760747] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.017 [2024-07-24 20:09:54.760875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.017 [2024-07-24 20:09:54.762456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.017 [2024-07-24 20:09:54.762473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.950 20:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.950 20:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:51.950 20:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.319 malloc0 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.319 20:09:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:53.319 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.319 00:15:53.319 00:15:53.319 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.319 http://cunit.sourceforge.net/ 00:15:53.319 00:15:53.319 00:15:53.319 Suite: nvme_compliance 00:15:53.319 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 20:09:56.982133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.319 [2024-07-24 20:09:56.983827] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:53.319 [2024-07-24 20:09:56.983866] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:53.319 [2024-07-24 20:09:56.983895] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:53.319 [2024-07-24 20:09:56.988184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.319 passed 00:15:53.319 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 20:09:57.090024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.319 [2024-07-24 20:09:57.093056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.577 passed 00:15:53.577 Test: admin_identify_ns ...[2024-07-24 20:09:57.198225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.577 [2024-07-24 20:09:57.256459] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:53.577 [2024-07-24 20:09:57.264471] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:53.577 [2024-07-24 
20:09:57.285628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.577 passed 00:15:53.835 Test: admin_get_features_mandatory_features ...[2024-07-24 20:09:57.390690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.835 [2024-07-24 20:09:57.394715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.835 passed 00:15:53.835 Test: admin_get_features_optional_features ...[2024-07-24 20:09:57.496501] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.835 [2024-07-24 20:09:57.499528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.835 passed 00:15:53.835 Test: admin_set_features_number_of_queues ...[2024-07-24 20:09:57.602666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.092 [2024-07-24 20:09:57.709599] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.092 passed 00:15:54.093 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 20:09:57.812512] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.093 [2024-07-24 20:09:57.818601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.093 passed 00:15:54.350 Test: admin_get_log_page_with_lpo ...[2024-07-24 20:09:57.921295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.350 [2024-07-24 20:09:57.987499] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:54.350 [2024-07-24 20:09:58.001556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.350 passed 00:15:54.350 Test: fabric_property_get ...[2024-07-24 20:09:58.103446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.350 [2024-07-24 20:09:58.104867] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:54.350 [2024-07-24 20:09:58.106483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.608 passed 00:15:54.608 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 20:09:58.210317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.608 [2024-07-24 20:09:58.211732] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:54.608 [2024-07-24 20:09:58.213342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.608 passed 00:15:54.608 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 20:09:58.316257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.865 [2024-07-24 20:09:58.399440] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:54.865 [2024-07-24 20:09:58.415442] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:54.865 [2024-07-24 20:09:58.420569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.865 passed 00:15:54.865 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 20:09:58.526363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.865 [2024-07-24 20:09:58.527781] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:15:54.865 [2024-07-24 20:09:58.530405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.865 passed 00:15:54.865 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 20:09:58.633229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.123 [2024-07-24 20:09:58.708446] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:55.123 [2024-07-24 20:09:58.732461] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:55.123 [2024-07-24 20:09:58.737586] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.123 passed 00:15:55.123 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 20:09:58.844124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.123 [2024-07-24 20:09:58.845555] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:55.123 [2024-07-24 20:09:58.845610] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:55.123 [2024-07-24 20:09:58.847159] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.123 passed 00:15:55.380 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 20:09:58.957226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.380 [2024-07-24 20:09:59.048461] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:55.380 [2024-07-24 20:09:59.056465] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:55.380 [2024-07-24 20:09:59.064463] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:55.380 [2024-07-24 20:09:59.072440] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:55.380 [2024-07-24 20:09:59.101572] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.380 passed 00:15:55.638 Test: admin_create_io_sq_verify_pc ...[2024-07-24 20:09:59.207649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.638 [2024-07-24 20:09:59.234463] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:55.638 [2024-07-24 20:09:59.252530] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.638 passed 00:15:55.638 Test: admin_create_io_qp_max_qps ...[2024-07-24 20:09:59.358320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.009 [2024-07-24 20:10:00.466450] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:57.267 [2024-07-24 20:10:00.855473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.267 passed 00:15:57.267 Test: admin_create_io_sq_shared_cq ...[2024-07-24 20:10:00.962226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.525 [2024-07-24 20:10:01.093445] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:57.525 [2024-07-24 20:10:01.130566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.525 passed 00:15:57.525 00:15:57.525 Run Summary: Type Total Ran Passed Failed Inactive 00:15:57.525 
suites 1 1 n/a 0 0 00:15:57.525 tests 18 18 18 0 0 00:15:57.525 asserts 360 360 360 0 n/a 00:15:57.525 00:15:57.525 Elapsed time = 1.766 seconds 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2033932 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2033932 ']' 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2033932 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2033932 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2033932' 00:15:57.525 killing process with pid 2033932 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2033932 00:15:57.525 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2033932 00:15:58.094 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:58.095 00:15:58.095 real 0m7.349s 00:15:58.095 user 0m20.665s 00:15:58.095 sys 0m0.764s 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:58.095 ************************************ 00:15:58.095 END TEST nvmf_vfio_user_nvme_compliance 00:15:58.095 ************************************ 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.095 ************************************ 00:15:58.095 START TEST nvmf_vfio_user_fuzz 00:15:58.095 ************************************ 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:58.095 * Looking for test storage... 
00:15:58.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2034789 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2034789' 00:15:58.095 Process pid: 2034789 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2034789 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2034789 ']' 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.095 20:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.664 20:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.664 20:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:58.664 20:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.042 malloc0 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:00.042 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
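For reference, the vfio-user fuzz target that the trace above just assembled comes down to a socket directory plus five RPCs. Below is a minimal sketch of the same sequence using SPDK's standard scripts/rpc.py client in place of the harness's rpc_cmd wrapper; the NQN, bdev size, serial, and socket directory are taken verbatim from the trace, and an nvmf_tgt process is assumed to already be listening on the default RPC socket.

    # Create the vfio-user transport inside the running nvmf_tgt.
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    # Directory where the vfio-user socket will live.
    mkdir -p /var/run/vfio-user
    # 64 MiB malloc bdev with 512-byte blocks to back the namespace.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    # Subsystem with serial 'spdk'; -a allows any host to connect.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The trid string built above ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is simply this listener expressed as a transport ID, and it is exactly what the nvme_fuzz invocation that follows is pointed at.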
00:16:00.043 20:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:32.152 Fuzzing completed. Shutting down the fuzz application 00:16:32.152 00:16:32.152 Dumping successful admin opcodes: 00:16:32.152 8, 9, 10, 24, 00:16:32.152 Dumping successful io opcodes: 00:16:32.152 0, 00:16:32.152 NS: 0x200003a1ef00 I/O qp, Total commands completed: 445734, total successful commands: 1733, random_seed: 1007112128 00:16:32.152 NS: 0x200003a1ef00 admin qp, Total commands completed: 78375, total successful commands: 606, random_seed: 1573270912 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2034789 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2034789 ']' 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2034789 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2034789 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2034789' 00:16:32.152 killing process with pid 2034789 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2034789 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2034789 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:32.152 00:16:32.152 real 0m32.997s 00:16:32.152 user 0m31.391s 00:16:32.152 sys 0m27.705s 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:32.152 
************************************ 00:16:32.152 END TEST nvmf_vfio_user_fuzz 00:16:32.152 ************************************ 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:32.152 ************************************ 00:16:32.152 START TEST nvmf_auth_target 00:16:32.152 ************************************ 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:32.152 * Looking for test storage... 00:16:32.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.152 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.153 20:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.153 20:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.060 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.061 20:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:34.061 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:34.061 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:34.061 Found net devices under 0000:84:00.0: cvl_0_0 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.061 20:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:34.061 Found net devices under 0000:84:00.1: cvl_0_1 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.061 20:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.061 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:16:34.322 00:16:34.322 --- 10.0.0.2 ping statistics --- 00:16:34.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.322 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:16:34.322 00:16:34.322 --- 10.0.0.1 ping statistics --- 00:16:34.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.322 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2040272 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2040272 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2040272 ']' 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.322 20:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.322 20:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2040504 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=48954bc324f5684a80f576ef56e573a323571b6c486905f8 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9uw 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 48954bc324f5684a80f576ef56e573a323571b6c486905f8 0 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 48954bc324f5684a80f576ef56e573a323571b6c486905f8 0 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=48954bc324f5684a80f576ef56e573a323571b6c486905f8 00:16:35.701 20:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9uw 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9uw 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.9uw 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5b48d983aa49757dc734073b29d283d3ed0e6d7d938bc8bf8775dac9e5fd2b9c 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wvf 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5b48d983aa49757dc734073b29d283d3ed0e6d7d938bc8bf8775dac9e5fd2b9c 3 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5b48d983aa49757dc734073b29d283d3ed0e6d7d938bc8bf8775dac9e5fd2b9c 3 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5b48d983aa49757dc734073b29d283d3ed0e6d7d938bc8bf8775dac9e5fd2b9c 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wvf 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wvf 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.wvf 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.701 20:10:39 
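gen_dhchap_key (nvmf/common.sh@723 onward) draws len/2 random bytes with xxd and keeps them as a hex string; the elided "python -" heredoc at @705 then wraps that string, as ASCII, into the DHHC-1 secret format: base64 of the key bytes plus a little-endian CRC32 trailer, prefixed with the digest id. A sketch of what that heredoc computes (a reconstruction; the upstream code may differ in detail):

key=48954bc324f5684a80f576ef56e573a323571b6c486905f8   # the hex string itself is the key, not its decoded bytes
digest=0                                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte integrity trailer
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF

This is why the null key above resurfaces later in the trace as DHHC-1:00:NDg5NTRiYzMy...: — the base64 payload decodes back to the hex string plus its CRC.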
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c3bc73037567d7af356052a5da460c20 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bA2 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c3bc73037567d7af356052a5da460c20 1 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c3bc73037567d7af356052a5da460c20 1 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c3bc73037567d7af356052a5da460c20 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bA2 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bA2 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.bA2 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b9f4a2d1f3b9744b7dfdd999952159aff1b503c6485e4452 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zjk 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b9f4a2d1f3b9744b7dfdd999952159aff1b503c6485e4452 2 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
b9f4a2d1f3b9744b7dfdd999952159aff1b503c6485e4452 2 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b9f4a2d1f3b9744b7dfdd999952159aff1b503c6485e4452 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:35.701 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zjk 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zjk 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.zjk 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ef4cc0e6fb81aa9e8b37f6859fe4333e15204b42d0a2228f 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rRd 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ef4cc0e6fb81aa9e8b37f6859fe4333e15204b42d0a2228f 2 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ef4cc0e6fb81aa9e8b37f6859fe4333e15204b42d0a2228f 2 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ef4cc0e6fb81aa9e8b37f6859fe4333e15204b42d0a2228f 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rRd 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rRd 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.rRd 00:16:35.960 20:10:39 
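Each generated secret is parked in a mktemp file and tightened to owner-only permissions before it is handed to the keyring; the pattern repeating above is, in outline (a sketch of nvmf/common.sh@728–@732; the redirect is not visible in the xtrace):

file=$(mktemp -t spdk.key-sha384.XXX)   # e.g. /tmp/spdk.key-sha384.rRd above
format_dhchap_key "$key" 2 > "$file"    # digest id 2 = sha384
chmod 0600 "$file"                      # owner-only before keyring registration
echo "$file"                            # caller stores the path in keys[]/ckeys[]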
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cdc775bace7dea35b2cf1dc3ca5f0454 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aRd 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cdc775bace7dea35b2cf1dc3ca5f0454 1 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cdc775bace7dea35b2cf1dc3ca5f0454 1 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cdc775bace7dea35b2cf1dc3ca5f0454 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aRd 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aRd 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.aRd 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:35.960 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.961 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:35.961 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:35.961 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=629c6f901021f9e8dde10bb820ea64cfb1b1cc9cbfc5f627d3e75861002fb4b1 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:36.219 
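Taken together, the gen_dhchap_key rounds at target/auth.sh@67–@70 (the last of which completes just below) build a matrix of host keys and controller keys with deliberately mixed digests and lengths; ckeys[3] is left empty so that key3 later exercises unidirectional authentication. Consistent with the traced digest/length pairs, the assignments amount to:

keys[0]=$(gen_dhchap_key null 48)    ckeys[0]=$(gen_dhchap_key sha512 64)
keys[1]=$(gen_dhchap_key sha256 32)  ckeys[1]=$(gen_dhchap_key sha384 48)
keys[2]=$(gen_dhchap_key sha384 48)  ckeys[2]=$(gen_dhchap_key sha256 32)
keys[3]=$(gen_dhchap_key sha512 64)  ckeys[3]=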
20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zCZ 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 629c6f901021f9e8dde10bb820ea64cfb1b1cc9cbfc5f627d3e75861002fb4b1 3 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 629c6f901021f9e8dde10bb820ea64cfb1b1cc9cbfc5f627d3e75861002fb4b1 3 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=629c6f901021f9e8dde10bb820ea64cfb1b1cc9cbfc5f627d3e75861002fb4b1 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zCZ 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zCZ 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.zCZ 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2040272 00:16:36.219 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2040272 ']' 00:16:36.220 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.220 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.220 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.220 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.220 20:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2040504 /var/tmp/host.sock 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2040504 ']' 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:16:36.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.478 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9uw 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.9uw 00:16:37.043 20:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.9uw 00:16:37.609 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.wvf ]] 00:16:37.609 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wvf 00:16:37.609 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.609 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.609 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.609 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wvf 00:16:37.609 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wvf 00:16:37.867 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:37.867 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.bA2 00:16:37.867 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.867 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.125 20:10:41 
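With both daemons listening, every key file is registered twice — once with the target over its default RPC socket and once with the host application via hostrpc, which is just rpc.py pinned to /var/tmp/host.sock (target/auth.sh@31). For key0 the pair of calls reduces to:

scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.9uw                        # target, /var/tmp/spdk.sock
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.9uw  # host app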
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.125 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.bA2 00:16:38.125 20:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.bA2 00:16:38.383 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.zjk ]] 00:16:38.383 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zjk 00:16:38.383 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.383 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.383 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.383 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zjk 00:16:38.383 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zjk 00:16:38.641 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:38.641 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rRd 00:16:38.641 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.641 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.641 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.641 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.rRd 00:16:38.641 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.rRd 00:16:38.899 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.aRd ]] 00:16:38.899 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aRd 00:16:38.899 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.899 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.899 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.899 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aRd 00:16:38.899 20:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aRd 00:16:39.465 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
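Each authentication round then follows the same two-step shape (target/auth.sh@39–@40): authorize the host NQN on the subsystem with a key pair, and attach a controller from the host side using the matching keys. With the NQNs from this run:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0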
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:39.465 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zCZ 00:16:39.465 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.465 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.465 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.465 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zCZ 00:16:39.465 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zCZ 00:16:40.031 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:40.031 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:40.031 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.031 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.031 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.031 20:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.597 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.854 00:16:40.854 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.854 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.854 20:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.420 { 00:16:41.420 "cntlid": 1, 00:16:41.420 "qid": 0, 00:16:41.420 "state": "enabled", 00:16:41.420 "thread": "nvmf_tgt_poll_group_000", 00:16:41.420 "listen_address": { 00:16:41.420 "trtype": "TCP", 00:16:41.420 "adrfam": "IPv4", 00:16:41.420 "traddr": "10.0.0.2", 00:16:41.420 "trsvcid": "4420" 00:16:41.420 }, 00:16:41.420 "peer_address": { 00:16:41.420 "trtype": "TCP", 00:16:41.420 "adrfam": "IPv4", 00:16:41.420 "traddr": "10.0.0.1", 00:16:41.420 "trsvcid": "51582" 00:16:41.420 }, 00:16:41.420 "auth": { 00:16:41.420 "state": "completed", 00:16:41.420 "digest": "sha256", 00:16:41.420 "dhgroup": "null" 00:16:41.420 } 00:16:41.420 } 00:16:41.420 ]' 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:41.420 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.678 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.678 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.678 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.935 20:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:16:43.308 20:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.308 20:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:43.308 20:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.308 20:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.308 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.308 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.308 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.308 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.874 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:16:44.132 00:16:44.132 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.132 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.132 20:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.391 { 00:16:44.391 "cntlid": 3, 00:16:44.391 "qid": 0, 00:16:44.391 "state": "enabled", 00:16:44.391 "thread": "nvmf_tgt_poll_group_000", 00:16:44.391 "listen_address": { 00:16:44.391 "trtype": "TCP", 00:16:44.391 "adrfam": "IPv4", 00:16:44.391 "traddr": "10.0.0.2", 00:16:44.391 "trsvcid": "4420" 00:16:44.391 }, 00:16:44.391 "peer_address": { 00:16:44.391 "trtype": "TCP", 00:16:44.391 "adrfam": "IPv4", 00:16:44.391 "traddr": "10.0.0.1", 00:16:44.391 "trsvcid": "51604" 00:16:44.391 }, 00:16:44.391 "auth": { 00:16:44.391 "state": "completed", 00:16:44.391 "digest": "sha256", 00:16:44.391 "dhgroup": "null" 00:16:44.391 } 00:16:44.391 } 00:16:44.391 ]' 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:44.391 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.649 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.649 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.649 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.239 20:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:16:46.173 20:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.173 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:46.173 20:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:46.173 20:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.173 20:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.173 20:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.173 20:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.173 20:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:46.173 20:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.739 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.997 00:16:46.997 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.997 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.997 20:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.563 { 00:16:47.563 "cntlid": 5, 00:16:47.563 "qid": 0, 00:16:47.563 "state": "enabled", 00:16:47.563 "thread": "nvmf_tgt_poll_group_000", 00:16:47.563 "listen_address": { 00:16:47.563 "trtype": "TCP", 00:16:47.563 "adrfam": "IPv4", 00:16:47.563 "traddr": "10.0.0.2", 00:16:47.563 "trsvcid": "4420" 00:16:47.563 }, 00:16:47.563 "peer_address": { 00:16:47.563 "trtype": "TCP", 00:16:47.563 "adrfam": "IPv4", 00:16:47.563 "traddr": "10.0.0.1", 00:16:47.563 "trsvcid": "37336" 00:16:47.563 }, 00:16:47.563 "auth": { 00:16:47.563 "state": "completed", 00:16:47.563 "digest": "sha256", 00:16:47.563 "dhgroup": "null" 00:16:47.563 } 00:16:47.563 } 00:16:47.563 ]' 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.563 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.821 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.821 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.821 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.388 20:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.771 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.341 00:16:50.341 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.341 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.341 20:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.908 { 00:16:50.908 "cntlid": 7, 00:16:50.908 "qid": 0, 00:16:50.908 "state": "enabled", 00:16:50.908 "thread": "nvmf_tgt_poll_group_000", 00:16:50.908 "listen_address": { 00:16:50.908 "trtype": "TCP", 00:16:50.908 "adrfam": "IPv4", 00:16:50.908 "traddr": "10.0.0.2", 00:16:50.908 "trsvcid": "4420" 00:16:50.908 }, 00:16:50.908 "peer_address": { 00:16:50.908 "trtype": "TCP", 00:16:50.908 "adrfam": "IPv4", 00:16:50.908 "traddr": "10.0.0.1", 00:16:50.908 "trsvcid": "37374" 00:16:50.908 }, 00:16:50.908 "auth": { 00:16:50.908 "state": "completed", 00:16:50.908 "digest": "sha256", 00:16:50.908 "dhgroup": "null" 00:16:50.908 } 00:16:50.908 } 00:16:50.908 ]' 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.908 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.166 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.167 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.167 20:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.734 20:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.110 20:10:56 
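The null-dhgroup rounds are done; the trace now re-enters the loop with FFDHE-2048. The @91–@94 markers show the overall shape: for each digest, DH group, and key id, bdev_nvme_set_options first restricts what the host may negotiate, then connect_authenticate runs a full round. A reconstructed loop skeleton (only the sha256 rounds are visible in this excerpt):

for dhgroup in null ffdhe2048; do   # further groups follow in the full run
  for keyid in "${!keys[@]}"; do
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha256 "$dhgroup" "$keyid"
  done
done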
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.110 20:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.678 00:16:53.678 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.678 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.678 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.936 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.936 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.936 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.936 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.936 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.936 20:10:57 
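connect_authenticate then proves the session really authenticated: it asserts on the auth block of the qpair dump printed just below, detaches, and replays the same handshake through the kernel initiator using the raw DHHC-1 secrets (target/auth.sh@44–@56). In outline:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]    # handshake finished
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]      # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]  # negotiated DH group
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'   # full secrets as in the trace
nvme disconnect -n nqn.2024-03.io.spdk:cnode0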
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.936 { 00:16:53.936 "cntlid": 9, 00:16:53.936 "qid": 0, 00:16:53.936 "state": "enabled", 00:16:53.936 "thread": "nvmf_tgt_poll_group_000", 00:16:53.936 "listen_address": { 00:16:53.936 "trtype": "TCP", 00:16:53.936 "adrfam": "IPv4", 00:16:53.936 "traddr": "10.0.0.2", 00:16:53.936 "trsvcid": "4420" 00:16:53.936 }, 00:16:53.936 "peer_address": { 00:16:53.936 "trtype": "TCP", 00:16:53.936 "adrfam": "IPv4", 00:16:53.936 "traddr": "10.0.0.1", 00:16:53.936 "trsvcid": "37390" 00:16:53.936 }, 00:16:53.936 "auth": { 00:16:53.936 "state": "completed", 00:16:53.936 "digest": "sha256", 00:16:53.936 "dhgroup": "ffdhe2048" 00:16:53.936 } 00:16:53.936 } 00:16:53.936 ]' 00:16:53.936 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.936 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.936 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.195 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.195 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.195 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.195 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.195 20:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.454 20:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:16:55.829 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.829 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:55.829 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.829 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.829 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.829 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.829 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.829 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.087 20:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.654 00:16:56.654 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.654 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.654 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.911 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.911 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.911 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.911 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.912 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.912 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.912 { 00:16:56.912 "cntlid": 11, 00:16:56.912 "qid": 0, 00:16:56.912 "state": "enabled", 00:16:56.912 "thread": "nvmf_tgt_poll_group_000", 00:16:56.912 "listen_address": { 
00:16:56.912 "trtype": "TCP", 00:16:56.912 "adrfam": "IPv4", 00:16:56.912 "traddr": "10.0.0.2", 00:16:56.912 "trsvcid": "4420" 00:16:56.912 }, 00:16:56.912 "peer_address": { 00:16:56.912 "trtype": "TCP", 00:16:56.912 "adrfam": "IPv4", 00:16:56.912 "traddr": "10.0.0.1", 00:16:56.912 "trsvcid": "46506" 00:16:56.912 }, 00:16:56.912 "auth": { 00:16:56.912 "state": "completed", 00:16:56.912 "digest": "sha256", 00:16:56.912 "dhgroup": "ffdhe2048" 00:16:56.912 } 00:16:56.912 } 00:16:56.912 ]' 00:16:56.912 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.912 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.912 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.912 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.912 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.168 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.168 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.168 20:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.425 20:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:16:58.799 20:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.799 20:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:58.799 20:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.799 20:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.799 20:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.799 20:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.799 20:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.799 20:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.365 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.936 00:16:59.936 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.936 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.936 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.193 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.193 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.193 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.193 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.193 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.193 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.193 { 00:17:00.193 "cntlid": 13, 00:17:00.193 "qid": 0, 00:17:00.193 "state": "enabled", 00:17:00.193 "thread": "nvmf_tgt_poll_group_000", 00:17:00.193 "listen_address": { 00:17:00.193 "trtype": "TCP", 00:17:00.193 "adrfam": "IPv4", 00:17:00.193 "traddr": "10.0.0.2", 00:17:00.193 "trsvcid": "4420" 00:17:00.193 }, 00:17:00.193 "peer_address": { 00:17:00.193 "trtype": "TCP", 00:17:00.193 "adrfam": "IPv4", 00:17:00.193 "traddr": "10.0.0.1", 00:17:00.193 "trsvcid": "46534" 00:17:00.193 }, 00:17:00.193 "auth": { 00:17:00.193 
"state": "completed", 00:17:00.193 "digest": "sha256", 00:17:00.193 "dhgroup": "ffdhe2048" 00:17:00.193 } 00:17:00.193 } 00:17:00.194 ]' 00:17:00.194 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.194 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.194 20:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.451 20:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.451 20:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.451 20:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.451 20:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.451 20:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.708 20:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:17:02.081 20:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.081 20:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:02.081 20:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.081 20:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.081 20:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.081 20:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.081 20:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.081 20:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.647 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.905 00:17:02.905 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.905 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.905 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.471 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.471 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.471 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.471 20:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.471 { 00:17:03.471 "cntlid": 15, 00:17:03.471 "qid": 0, 00:17:03.471 "state": "enabled", 00:17:03.471 "thread": "nvmf_tgt_poll_group_000", 00:17:03.471 "listen_address": { 00:17:03.471 "trtype": "TCP", 00:17:03.471 "adrfam": "IPv4", 00:17:03.471 "traddr": "10.0.0.2", 00:17:03.471 "trsvcid": "4420" 00:17:03.471 }, 00:17:03.471 "peer_address": { 00:17:03.471 "trtype": "TCP", 00:17:03.471 "adrfam": "IPv4", 00:17:03.471 "traddr": "10.0.0.1", 00:17:03.471 "trsvcid": "46564" 00:17:03.471 }, 00:17:03.471 "auth": { 00:17:03.471 "state": "completed", 00:17:03.471 "digest": "sha256", 00:17:03.471 "dhgroup": "ffdhe2048" 00:17:03.471 } 00:17:03.471 } 00:17:03.471 ]' 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.471 20:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.471 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.038 20:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.971 20:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.536 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.102 00:17:06.102 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.102 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.102 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.360 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.360 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.360 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.360 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.361 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.361 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.361 { 00:17:06.361 "cntlid": 17, 00:17:06.361 "qid": 0, 00:17:06.361 "state": "enabled", 00:17:06.361 "thread": "nvmf_tgt_poll_group_000", 00:17:06.361 "listen_address": { 00:17:06.361 "trtype": "TCP", 00:17:06.361 "adrfam": "IPv4", 00:17:06.361 "traddr": "10.0.0.2", 00:17:06.361 "trsvcid": "4420" 00:17:06.361 }, 00:17:06.361 "peer_address": { 00:17:06.361 "trtype": "TCP", 00:17:06.361 "adrfam": "IPv4", 00:17:06.361 "traddr": "10.0.0.1", 00:17:06.361 "trsvcid": "48618" 00:17:06.361 }, 00:17:06.361 "auth": { 00:17:06.361 "state": "completed", 00:17:06.361 "digest": "sha256", 00:17:06.361 "dhgroup": "ffdhe3072" 00:17:06.361 } 00:17:06.361 } 00:17:06.361 ]' 00:17:06.361 20:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.361 20:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.361 20:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.361 20:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.361 20:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.618 20:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.618 20:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.618 20:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.184 20:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:17:08.558 20:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.558 20:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:08.558 20:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.558 20:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.558 20:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.558 20:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.558 20:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:08.558 20:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.817 20:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.817 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.383 00:17:09.383 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.383 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.383 20:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.640 { 00:17:09.640 "cntlid": 19, 00:17:09.640 "qid": 0, 00:17:09.640 "state": "enabled", 00:17:09.640 "thread": "nvmf_tgt_poll_group_000", 00:17:09.640 "listen_address": { 00:17:09.640 "trtype": "TCP", 00:17:09.640 "adrfam": "IPv4", 00:17:09.640 "traddr": "10.0.0.2", 00:17:09.640 "trsvcid": "4420" 00:17:09.640 }, 00:17:09.640 "peer_address": { 00:17:09.640 "trtype": "TCP", 00:17:09.640 "adrfam": "IPv4", 00:17:09.640 "traddr": "10.0.0.1", 00:17:09.640 "trsvcid": "48658" 00:17:09.640 }, 00:17:09.640 "auth": { 00:17:09.640 "state": "completed", 00:17:09.640 "digest": "sha256", 00:17:09.640 "dhgroup": "ffdhe3072" 00:17:09.640 } 00:17:09.640 } 00:17:09.640 ]' 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:09.640 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.898 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.898 20:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.898 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.156 20:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:17:11.529 20:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.529 20:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:11.529 20:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.529 20:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.529 20:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.529 20:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.529 20:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:11.529 20:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.529 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.094 00:17:12.094 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.094 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.094 20:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.352 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.352 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.352 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.352 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.352 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.610 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.610 { 00:17:12.610 "cntlid": 21, 00:17:12.610 "qid": 0, 00:17:12.610 "state": "enabled", 00:17:12.610 "thread": "nvmf_tgt_poll_group_000", 00:17:12.610 "listen_address": { 00:17:12.610 "trtype": "TCP", 00:17:12.610 "adrfam": "IPv4", 00:17:12.611 "traddr": "10.0.0.2", 00:17:12.611 "trsvcid": "4420" 00:17:12.611 }, 00:17:12.611 "peer_address": { 00:17:12.611 "trtype": "TCP", 00:17:12.611 "adrfam": "IPv4", 00:17:12.611 "traddr": "10.0.0.1", 00:17:12.611 "trsvcid": "48696" 00:17:12.611 }, 00:17:12.611 "auth": { 00:17:12.611 "state": "completed", 00:17:12.611 "digest": "sha256", 00:17:12.611 "dhgroup": "ffdhe3072" 00:17:12.611 } 00:17:12.611 } 00:17:12.611 ]' 00:17:12.611 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.611 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.611 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.611 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.611 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.611 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.611 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.611 20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.869 
20:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:17:14.244 20:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.244 20:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:14.244 20:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.244 20:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.244 20:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.244 20:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.244 20:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:14.244 20:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.842 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.100 00:17:15.100 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.100 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.100 20:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.358 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.358 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.358 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.358 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.358 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.358 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.358 { 00:17:15.358 "cntlid": 23, 00:17:15.358 "qid": 0, 00:17:15.358 "state": "enabled", 00:17:15.358 "thread": "nvmf_tgt_poll_group_000", 00:17:15.358 "listen_address": { 00:17:15.358 "trtype": "TCP", 00:17:15.358 "adrfam": "IPv4", 00:17:15.358 "traddr": "10.0.0.2", 00:17:15.358 "trsvcid": "4420" 00:17:15.358 }, 00:17:15.358 "peer_address": { 00:17:15.358 "trtype": "TCP", 00:17:15.358 "adrfam": "IPv4", 00:17:15.358 "traddr": "10.0.0.1", 00:17:15.358 "trsvcid": "48716" 00:17:15.358 }, 00:17:15.358 "auth": { 00:17:15.358 "state": "completed", 00:17:15.358 "digest": "sha256", 00:17:15.358 "dhgroup": "ffdhe3072" 00:17:15.358 } 00:17:15.358 } 00:17:15.358 ]' 00:17:15.358 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.616 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.616 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.616 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.616 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.616 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.616 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.616 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.181 20:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:17:17.113 20:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.113 20:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.113 20:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.113 20:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.113 20:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.113 20:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.113 20:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.113 20:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.113 20:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.679 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.244 00:17:18.244 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.244 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.244 20:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.810 { 00:17:18.810 "cntlid": 25, 00:17:18.810 "qid": 0, 00:17:18.810 "state": "enabled", 00:17:18.810 "thread": "nvmf_tgt_poll_group_000", 00:17:18.810 "listen_address": { 00:17:18.810 "trtype": "TCP", 00:17:18.810 "adrfam": "IPv4", 00:17:18.810 "traddr": "10.0.0.2", 00:17:18.810 "trsvcid": "4420" 00:17:18.810 }, 00:17:18.810 "peer_address": { 00:17:18.810 "trtype": "TCP", 00:17:18.810 "adrfam": "IPv4", 00:17:18.810 "traddr": "10.0.0.1", 00:17:18.810 "trsvcid": "45294" 00:17:18.810 }, 00:17:18.810 "auth": { 00:17:18.810 "state": "completed", 00:17:18.810 "digest": "sha256", 00:17:18.810 "dhgroup": "ffdhe4096" 00:17:18.810 } 00:17:18.810 } 00:17:18.810 ]' 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.810 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.376 20:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:17:20.309 20:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
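The disconnect above closes out one full authentication cycle; the trace repeats this same pattern for every key/dhgroup combination (key0 through key3 across ffdhe2048, ffdhe3072 and ffdhe4096). Stripped of the xtrace noise, one iteration reduces to a handful of commands. The sketch below is a minimal reconstruction using the same RPCs and nvme-cli flags echoed in the log; $hostnqn, $hostid, $key0 and $ckey0 are placeholders for the literal host NQN, host UUID and DHHC-1 secrets printed in the trace, and the named keys key0/ckey0 refer to keys set up earlier in the run, outside this excerpt:

    # Host side: pin bdev_nvme to one digest/dhgroup combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # Target side: allow the host on the subsystem with a key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach an authenticated controller through the host RPC socket
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # ...verify the qpair (see the checks sketched below), then tear down and
    # redo the handshake with nvme-cli, passing raw secrets instead of key names
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The DHHC-1:NN:...: strings follow the NVMe in-band authentication secret format, where the NN field encodes the hash associated with the secret (00 none/plain, 01 SHA-256, 02 SHA-384, 03 SHA-512) — which is why key0 carries a DHHC-1:00: prefix while its controller key carries DHHC-1:03: in the connect lines above.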
00:17:20.309 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:20.309 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.309 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.309 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.309 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.309 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:20.309 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.875 20:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.440 00:17:21.440 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.440 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.440 20:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.698 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.699 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.699 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.699 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.699 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.699 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.699 { 00:17:21.699 "cntlid": 27, 00:17:21.699 "qid": 0, 00:17:21.699 "state": "enabled", 00:17:21.699 "thread": "nvmf_tgt_poll_group_000", 00:17:21.699 "listen_address": { 00:17:21.699 "trtype": "TCP", 00:17:21.699 "adrfam": "IPv4", 00:17:21.699 "traddr": "10.0.0.2", 00:17:21.699 "trsvcid": "4420" 00:17:21.699 }, 00:17:21.699 "peer_address": { 00:17:21.699 "trtype": "TCP", 00:17:21.699 "adrfam": "IPv4", 00:17:21.699 "traddr": "10.0.0.1", 00:17:21.699 "trsvcid": "45314" 00:17:21.699 }, 00:17:21.699 "auth": { 00:17:21.699 "state": "completed", 00:17:21.699 "digest": "sha256", 00:17:21.699 "dhgroup": "ffdhe4096" 00:17:21.699 } 00:17:21.699 } 00:17:21.699 ]' 00:17:21.699 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.956 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.956 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.956 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.956 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.956 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.956 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.956 20:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.521 20:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:17:23.894 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.894 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:23.894 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:23.894 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.894 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.894 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.894 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:23.894 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.152 20:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.718 00:17:24.718 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.718 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.718 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
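For orientation, the trace here is one iteration of the test's connect_authenticate loop: for each digest/dhgroup/key combination the host's allowed DH-HMAC-CHAP parameters are pinned with bdev_nvme_set_options, the host NQN is added to the subsystem with the keys under test, a controller is attached, the qpair's auth.state is checked for "completed", and everything is torn down again. Below is a minimal standalone sketch of one iteration (digest sha256, dhgroup ffdhe4096, keyid 2), not the verbatim test script: it assumes the target RPC server is on its default socket, the host bdev_nvme RPC server is on /var/tmp/host.sock as in this run, and that keyring entries key0..key3 / ckey0..ckey3 were registered earlier in the test (not shown in this excerpt).

    #!/usr/bin/env bash
    # Hypothetical replay of one connect_authenticate iteration,
    # mirroring the RPCs visible in the surrounding trace.
    set -e
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # Host side: only negotiate the digest/dhgroup under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Target side: allow the host on the subsystem with the keys under test.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller, authenticating with the same key pair.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify the controller came up and the qpair finished authentication.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    [[ $($rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state') == completed ]]

    # Tear down before the next digest/dhgroup/key combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each iteration is then exercised a second time through the kernel initiator (nvme connect ... --dhchap-secret / --dhchap-ctrl-secret followed by nvme disconnect), as visible in the log. The DHHC-1:NN: prefix on those secrets is the NVMe-defined DH-HMAC-CHAP secret representation, where NN selects the key transformation (00 none, 01/02/03 SHA-256/384/512-derived), which is why the base64 payload lengths of key1, key2 and key3 differ.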
00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.284 { 00:17:25.284 "cntlid": 29, 00:17:25.284 "qid": 0, 00:17:25.284 "state": "enabled", 00:17:25.284 "thread": "nvmf_tgt_poll_group_000", 00:17:25.284 "listen_address": { 00:17:25.284 "trtype": "TCP", 00:17:25.284 "adrfam": "IPv4", 00:17:25.284 "traddr": "10.0.0.2", 00:17:25.284 "trsvcid": "4420" 00:17:25.284 }, 00:17:25.284 "peer_address": { 00:17:25.284 "trtype": "TCP", 00:17:25.284 "adrfam": "IPv4", 00:17:25.284 "traddr": "10.0.0.1", 00:17:25.284 "trsvcid": "45352" 00:17:25.284 }, 00:17:25.284 "auth": { 00:17:25.284 "state": "completed", 00:17:25.284 "digest": "sha256", 00:17:25.284 "dhgroup": "ffdhe4096" 00:17:25.284 } 00:17:25.284 } 00:17:25.284 ]' 00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.284 20:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.284 20:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.284 20:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.541 20:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.541 20:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.542 20:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.800 20:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:17:27.174 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.174 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:27.174 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.174 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.174 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.174 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:27.174 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.174 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.432 20:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.997 00:17:27.997 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.997 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.997 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.254 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.254 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.254 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.254 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.254 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.254 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:17:28.254 { 00:17:28.254 "cntlid": 31, 00:17:28.254 "qid": 0, 00:17:28.254 "state": "enabled", 00:17:28.254 "thread": "nvmf_tgt_poll_group_000", 00:17:28.254 "listen_address": { 00:17:28.254 "trtype": "TCP", 00:17:28.254 "adrfam": "IPv4", 00:17:28.254 "traddr": "10.0.0.2", 00:17:28.254 "trsvcid": "4420" 00:17:28.254 }, 00:17:28.254 "peer_address": { 00:17:28.254 "trtype": "TCP", 00:17:28.254 "adrfam": "IPv4", 00:17:28.254 "traddr": "10.0.0.1", 00:17:28.254 "trsvcid": "60154" 00:17:28.254 }, 00:17:28.254 "auth": { 00:17:28.254 "state": "completed", 00:17:28.254 "digest": "sha256", 00:17:28.254 "dhgroup": "ffdhe4096" 00:17:28.254 } 00:17:28.254 } 00:17:28.254 ]' 00:17:28.254 20:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.254 20:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.254 20:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.511 20:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.511 20:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.511 20:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.511 20:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.511 20:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.088 20:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:17:30.460 20:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.460 20:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:30.460 20:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.460 20:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.460 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.460 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.460 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.460 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.460 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.718 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.284 00:17:31.284 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.284 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.284 20:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.849 { 00:17:31.849 "cntlid": 33, 00:17:31.849 "qid": 0, 00:17:31.849 "state": "enabled", 00:17:31.849 "thread": "nvmf_tgt_poll_group_000", 00:17:31.849 "listen_address": { 00:17:31.849 "trtype": "TCP", 00:17:31.849 "adrfam": "IPv4", 
00:17:31.849 "traddr": "10.0.0.2", 00:17:31.849 "trsvcid": "4420" 00:17:31.849 }, 00:17:31.849 "peer_address": { 00:17:31.849 "trtype": "TCP", 00:17:31.849 "adrfam": "IPv4", 00:17:31.849 "traddr": "10.0.0.1", 00:17:31.849 "trsvcid": "60194" 00:17:31.849 }, 00:17:31.849 "auth": { 00:17:31.849 "state": "completed", 00:17:31.849 "digest": "sha256", 00:17:31.849 "dhgroup": "ffdhe6144" 00:17:31.849 } 00:17:31.849 } 00:17:31.849 ]' 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.849 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.107 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.107 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.107 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.107 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.107 20:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.674 20:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:17:34.046 20:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.046 20:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:34.046 20:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.046 20:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.046 20:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.046 20:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.046 20:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.046 20:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.304 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.561 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.561 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.128 00:17:35.128 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.128 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.128 20:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.386 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.386 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.644 { 00:17:35.644 "cntlid": 35, 00:17:35.644 "qid": 0, 00:17:35.644 "state": "enabled", 00:17:35.644 "thread": "nvmf_tgt_poll_group_000", 00:17:35.644 "listen_address": { 00:17:35.644 "trtype": "TCP", 00:17:35.644 "adrfam": "IPv4", 00:17:35.644 "traddr": "10.0.0.2", 00:17:35.644 "trsvcid": "4420" 00:17:35.644 }, 00:17:35.644 "peer_address": { 00:17:35.644 "trtype": "TCP", 00:17:35.644 "adrfam": "IPv4", 00:17:35.644 "traddr": "10.0.0.1", 00:17:35.644 "trsvcid": "60228" 00:17:35.644 }, 00:17:35.644 "auth": { 00:17:35.644 
"state": "completed", 00:17:35.644 "digest": "sha256", 00:17:35.644 "dhgroup": "ffdhe6144" 00:17:35.644 } 00:17:35.644 } 00:17:35.644 ]' 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.644 20:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.578 20:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:17:37.512 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.512 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:37.512 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.512 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.512 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.512 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.512 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:37.512 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.078 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.079 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.079 20:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.012 00:17:39.012 20:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.012 20:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.012 20:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.269 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.269 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.269 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.269 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.269 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.269 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.269 { 00:17:39.269 "cntlid": 37, 00:17:39.269 "qid": 0, 00:17:39.269 "state": "enabled", 00:17:39.269 "thread": "nvmf_tgt_poll_group_000", 00:17:39.269 "listen_address": { 00:17:39.269 "trtype": "TCP", 00:17:39.269 "adrfam": "IPv4", 00:17:39.269 "traddr": "10.0.0.2", 00:17:39.269 "trsvcid": "4420" 00:17:39.269 }, 00:17:39.269 "peer_address": { 00:17:39.269 "trtype": "TCP", 00:17:39.269 "adrfam": "IPv4", 00:17:39.269 "traddr": "10.0.0.1", 00:17:39.269 "trsvcid": "46900" 00:17:39.269 }, 00:17:39.269 "auth": { 00:17:39.269 "state": "completed", 00:17:39.269 "digest": "sha256", 00:17:39.269 "dhgroup": "ffdhe6144" 00:17:39.269 } 00:17:39.269 } 00:17:39.269 ]' 00:17:39.269 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.527 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:17:39.527 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.527 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.527 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.527 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.527 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.527 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.093 20:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:17:41.026 20:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.026 20:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:41.026 20:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.026 20:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.026 20:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.026 20:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.026 20:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.026 20:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.592 20:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.525 00:17:42.525 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.525 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.525 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.783 { 00:17:42.783 "cntlid": 39, 00:17:42.783 "qid": 0, 00:17:42.783 "state": "enabled", 00:17:42.783 "thread": "nvmf_tgt_poll_group_000", 00:17:42.783 "listen_address": { 00:17:42.783 "trtype": "TCP", 00:17:42.783 "adrfam": "IPv4", 00:17:42.783 "traddr": "10.0.0.2", 00:17:42.783 "trsvcid": "4420" 00:17:42.783 }, 00:17:42.783 "peer_address": { 00:17:42.783 "trtype": "TCP", 00:17:42.783 "adrfam": "IPv4", 00:17:42.783 "traddr": "10.0.0.1", 00:17:42.783 "trsvcid": "46928" 00:17:42.783 }, 00:17:42.783 "auth": { 00:17:42.783 "state": "completed", 00:17:42.783 "digest": "sha256", 00:17:42.783 "dhgroup": "ffdhe6144" 00:17:42.783 } 00:17:42.783 } 00:17:42.783 ]' 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.783 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.041 
20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.041 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.041 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.318 20:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:17:44.276 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.276 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:44.276 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.276 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.534 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.534 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.534 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.534 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.534 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.792 20:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.792 20:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.165 00:17:46.165 20:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.165 20:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.165 20:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.423 { 00:17:46.423 "cntlid": 41, 00:17:46.423 "qid": 0, 00:17:46.423 "state": "enabled", 00:17:46.423 "thread": "nvmf_tgt_poll_group_000", 00:17:46.423 "listen_address": { 00:17:46.423 "trtype": "TCP", 00:17:46.423 "adrfam": "IPv4", 00:17:46.423 "traddr": "10.0.0.2", 00:17:46.423 "trsvcid": "4420" 00:17:46.423 }, 00:17:46.423 "peer_address": { 00:17:46.423 "trtype": "TCP", 00:17:46.423 "adrfam": "IPv4", 00:17:46.423 "traddr": "10.0.0.1", 00:17:46.423 "trsvcid": "49334" 00:17:46.423 }, 00:17:46.423 "auth": { 00:17:46.423 "state": "completed", 00:17:46.423 "digest": "sha256", 00:17:46.423 "dhgroup": "ffdhe8192" 00:17:46.423 } 00:17:46.423 } 00:17:46.423 ]' 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.423 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.681 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.681 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.681 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.681 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.681 20:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.247 20:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:17:48.617 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.617 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:48.617 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.617 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.617 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.617 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.617 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.617 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.874 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:48.874 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.874 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.874 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:48.874 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.874 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.875 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.875 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.875 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.875 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.875 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.875 20:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.253 00:17:50.253 20:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.253 20:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.253 20:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.511 { 00:17:50.511 "cntlid": 43, 00:17:50.511 "qid": 0, 00:17:50.511 "state": "enabled", 00:17:50.511 "thread": "nvmf_tgt_poll_group_000", 00:17:50.511 "listen_address": { 00:17:50.511 "trtype": "TCP", 00:17:50.511 "adrfam": "IPv4", 00:17:50.511 "traddr": "10.0.0.2", 00:17:50.511 "trsvcid": "4420" 00:17:50.511 }, 00:17:50.511 "peer_address": { 00:17:50.511 "trtype": "TCP", 00:17:50.511 "adrfam": "IPv4", 00:17:50.511 "traddr": "10.0.0.1", 00:17:50.511 "trsvcid": "49356" 00:17:50.511 }, 00:17:50.511 "auth": { 00:17:50.511 "state": "completed", 00:17:50.511 "digest": "sha256", 00:17:50.511 "dhgroup": "ffdhe8192" 00:17:50.511 } 00:17:50.511 } 00:17:50.511 ]' 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.511 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.077 20:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:17:52.450 20:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.450 20:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:52.450 20:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.450 20:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.450 20:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.450 20:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.450 20:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.450 20:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.708 20:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.640 00:17:53.640 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.641 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.641 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.206 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.206 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.206 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.206 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.206 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.206 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.206 { 00:17:54.206 "cntlid": 45, 00:17:54.206 "qid": 0, 00:17:54.206 "state": "enabled", 00:17:54.206 "thread": "nvmf_tgt_poll_group_000", 00:17:54.206 "listen_address": { 00:17:54.207 "trtype": "TCP", 00:17:54.207 "adrfam": "IPv4", 00:17:54.207 "traddr": "10.0.0.2", 00:17:54.207 "trsvcid": "4420" 00:17:54.207 }, 00:17:54.207 "peer_address": { 00:17:54.207 "trtype": "TCP", 00:17:54.207 "adrfam": "IPv4", 00:17:54.207 "traddr": "10.0.0.1", 00:17:54.207 "trsvcid": "49402" 00:17:54.207 }, 00:17:54.207 "auth": { 00:17:54.207 "state": "completed", 00:17:54.207 "digest": "sha256", 00:17:54.207 "dhgroup": "ffdhe8192" 00:17:54.207 } 00:17:54.207 } 00:17:54.207 ]' 00:17:54.207 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.207 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.207 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.207 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.207 20:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.465 20:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.465 20:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.465 20:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.722 20:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret 
DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.095 20:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.469 00:17:57.469 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.469 20:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.469 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.727 { 00:17:57.727 "cntlid": 47, 00:17:57.727 "qid": 0, 00:17:57.727 "state": "enabled", 00:17:57.727 "thread": "nvmf_tgt_poll_group_000", 00:17:57.727 "listen_address": { 00:17:57.727 "trtype": "TCP", 00:17:57.727 "adrfam": "IPv4", 00:17:57.727 "traddr": "10.0.0.2", 00:17:57.727 "trsvcid": "4420" 00:17:57.727 }, 00:17:57.727 "peer_address": { 00:17:57.727 "trtype": "TCP", 00:17:57.727 "adrfam": "IPv4", 00:17:57.727 "traddr": "10.0.0.1", 00:17:57.727 "trsvcid": "40020" 00:17:57.727 }, 00:17:57.727 "auth": { 00:17:57.727 "state": "completed", 00:17:57.727 "digest": "sha256", 00:17:57.727 "dhgroup": "ffdhe8192" 00:17:57.727 } 00:17:57.727 } 00:17:57.727 ]' 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.727 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.293 20:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.690 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.256 00:18:00.256 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.256 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:00.256 20:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.822 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.822 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.822 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.822 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.822 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.822 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.822 { 00:18:00.822 "cntlid": 49, 00:18:00.822 "qid": 0, 00:18:00.822 "state": "enabled", 00:18:00.822 "thread": "nvmf_tgt_poll_group_000", 00:18:00.822 "listen_address": { 00:18:00.822 "trtype": "TCP", 00:18:00.822 "adrfam": "IPv4", 00:18:00.822 "traddr": "10.0.0.2", 00:18:00.822 "trsvcid": "4420" 00:18:00.822 }, 00:18:00.822 "peer_address": { 00:18:00.822 "trtype": "TCP", 00:18:00.822 "adrfam": "IPv4", 00:18:00.822 "traddr": "10.0.0.1", 00:18:00.823 "trsvcid": "40030" 00:18:00.823 }, 00:18:00.823 "auth": { 00:18:00.823 "state": "completed", 00:18:00.823 "digest": "sha384", 00:18:00.823 "dhgroup": "null" 00:18:00.823 } 00:18:00.823 } 00:18:00.823 ]' 00:18:00.823 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.823 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.823 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.823 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:00.823 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.823 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.823 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.823 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.388 20:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:02.761 20:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:02.761 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.762 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.762 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.762 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.762 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.762 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.762 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.762 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.019 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.019 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.019 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.277 00:18:03.277 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.277 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.277 20:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.535 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.535 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.535 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.535 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.535 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.535 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.535 { 00:18:03.535 "cntlid": 51, 00:18:03.535 "qid": 0, 00:18:03.535 "state": "enabled", 00:18:03.535 "thread": "nvmf_tgt_poll_group_000", 00:18:03.535 "listen_address": { 00:18:03.535 "trtype": "TCP", 00:18:03.535 "adrfam": "IPv4", 00:18:03.535 "traddr": "10.0.0.2", 00:18:03.535 "trsvcid": "4420" 00:18:03.535 }, 00:18:03.535 "peer_address": { 00:18:03.535 "trtype": "TCP", 00:18:03.535 "adrfam": "IPv4", 00:18:03.535 "traddr": "10.0.0.1", 00:18:03.535 "trsvcid": "40050" 00:18:03.535 }, 00:18:03.535 "auth": { 00:18:03.535 "state": "completed", 00:18:03.535 "digest": "sha384", 00:18:03.535 "dhgroup": "null" 00:18:03.535 } 00:18:03.535 } 00:18:03.535 ]' 00:18:03.535 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.793 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.793 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.793 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:03.793 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.793 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.793 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.793 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.359 20:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:18:05.292 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.550 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:05.550 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.550 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.550 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.550 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.550 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.551 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.117 20:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.375 00:18:06.375 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.375 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.375 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.941 { 00:18:06.941 "cntlid": 53, 00:18:06.941 "qid": 0, 00:18:06.941 "state": "enabled", 00:18:06.941 "thread": "nvmf_tgt_poll_group_000", 00:18:06.941 "listen_address": { 00:18:06.941 "trtype": "TCP", 00:18:06.941 "adrfam": "IPv4", 00:18:06.941 "traddr": "10.0.0.2", 00:18:06.941 "trsvcid": "4420" 00:18:06.941 }, 00:18:06.941 "peer_address": { 00:18:06.941 "trtype": "TCP", 00:18:06.941 "adrfam": "IPv4", 00:18:06.941 "traddr": "10.0.0.1", 00:18:06.941 "trsvcid": "49894" 00:18:06.941 }, 00:18:06.941 "auth": { 00:18:06.941 "state": "completed", 00:18:06.941 "digest": "sha384", 00:18:06.941 "dhgroup": "null" 00:18:06.941 } 00:18:06.941 } 00:18:06.941 ]' 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.941 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.199 20:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:18:08.571 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.571 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:08.571 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.571 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.572 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.572 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.572 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.572 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.830 20:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.396 00:18:09.396 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.396 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.396 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.972 { 00:18:09.972 "cntlid": 55, 00:18:09.972 "qid": 0, 00:18:09.972 "state": "enabled", 00:18:09.972 "thread": "nvmf_tgt_poll_group_000", 00:18:09.972 "listen_address": { 00:18:09.972 "trtype": "TCP", 00:18:09.972 "adrfam": "IPv4", 00:18:09.972 "traddr": "10.0.0.2", 00:18:09.972 "trsvcid": "4420" 00:18:09.972 }, 00:18:09.972 "peer_address": { 
00:18:09.972 "trtype": "TCP", 00:18:09.972 "adrfam": "IPv4", 00:18:09.972 "traddr": "10.0.0.1", 00:18:09.972 "trsvcid": "49902" 00:18:09.972 }, 00:18:09.972 "auth": { 00:18:09.972 "state": "completed", 00:18:09.972 "digest": "sha384", 00:18:09.972 "dhgroup": "null" 00:18:09.972 } 00:18:09.972 } 00:18:09.972 ]' 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:09.972 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.229 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.229 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.230 20:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.794 20:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.168 20:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.735 00:18:12.735 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.735 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.735 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.335 { 00:18:13.335 "cntlid": 57, 00:18:13.335 "qid": 0, 00:18:13.335 "state": "enabled", 00:18:13.335 "thread": "nvmf_tgt_poll_group_000", 00:18:13.335 "listen_address": { 00:18:13.335 "trtype": "TCP", 00:18:13.335 "adrfam": "IPv4", 00:18:13.335 "traddr": "10.0.0.2", 00:18:13.335 "trsvcid": "4420" 00:18:13.335 }, 00:18:13.335 "peer_address": { 00:18:13.335 "trtype": "TCP", 00:18:13.335 "adrfam": "IPv4", 00:18:13.335 "traddr": "10.0.0.1", 00:18:13.335 "trsvcid": "49936" 00:18:13.335 }, 00:18:13.335 "auth": { 00:18:13.335 "state": "completed", 00:18:13.335 "digest": "sha384", 00:18:13.335 "dhgroup": "ffdhe2048" 00:18:13.335 } 00:18:13.335 } 00:18:13.335 ]' 
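Each iteration in this log exercises the same DH-HMAC-CHAP cycle with a different digest/dhgroup/key combination. A minimal sketch of one pass follows, using only the RPCs and flags that appear in the log itself; HOSTNQN, HOSTID, KEYID and the DHHC-1 secrets are illustrative placeholders, rpc.py stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and the host-side bdev_nvme instance is assumed to listen on /var/tmp/host.sock as shown above:

# Host side: restrict the initiator to one digest/dhgroup combination
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side: authorize the host NQN with the DH-HMAC-CHAP key pair
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

# Attach a controller through the authenticated path
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
    -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

# Verify the negotiated auth parameters on the target's qpair
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# Detach the host-side controller, then repeat the handshake with the
# kernel initiator (DHHC-1 secrets elided here)
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Tear down before the next digest/dhgroup/key combination
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"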
00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.335 20:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.900 20:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:18:15.271 20:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.271 20:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:15.271 20:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.271 20:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.271 20:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.271 20:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.271 20:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.271 20:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.529 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.095 00:18:16.095 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.095 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.095 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.354 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.354 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.354 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.354 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.354 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.354 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.354 { 00:18:16.354 "cntlid": 59, 00:18:16.354 "qid": 0, 00:18:16.354 "state": "enabled", 00:18:16.354 "thread": "nvmf_tgt_poll_group_000", 00:18:16.354 "listen_address": { 00:18:16.354 "trtype": "TCP", 00:18:16.354 "adrfam": "IPv4", 00:18:16.354 "traddr": "10.0.0.2", 00:18:16.354 "trsvcid": "4420" 00:18:16.354 }, 00:18:16.354 "peer_address": { 00:18:16.354 "trtype": "TCP", 00:18:16.354 "adrfam": "IPv4", 00:18:16.354 "traddr": "10.0.0.1", 00:18:16.354 "trsvcid": "48298" 00:18:16.354 }, 00:18:16.354 "auth": { 00:18:16.354 "state": "completed", 00:18:16.354 "digest": "sha384", 00:18:16.354 "dhgroup": "ffdhe2048" 00:18:16.354 } 00:18:16.354 } 00:18:16.354 ]' 00:18:16.354 20:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.354 20:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.354 20:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.354 20:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.354 20:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.612 20:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.612 20:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.612 20:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.177 20:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:18:18.550 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.550 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:18.550 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.550 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.551 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.551 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.551 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.551 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.808 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:18.808 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.808 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.808 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:18.808 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.808 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.808 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.808 
20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.808 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.066 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.066 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.066 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.324 00:18:19.324 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.324 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.324 20:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.889 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.889 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.889 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.889 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.889 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.889 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.889 { 00:18:19.889 "cntlid": 61, 00:18:19.889 "qid": 0, 00:18:19.889 "state": "enabled", 00:18:19.889 "thread": "nvmf_tgt_poll_group_000", 00:18:19.889 "listen_address": { 00:18:19.889 "trtype": "TCP", 00:18:19.889 "adrfam": "IPv4", 00:18:19.890 "traddr": "10.0.0.2", 00:18:19.890 "trsvcid": "4420" 00:18:19.890 }, 00:18:19.890 "peer_address": { 00:18:19.890 "trtype": "TCP", 00:18:19.890 "adrfam": "IPv4", 00:18:19.890 "traddr": "10.0.0.1", 00:18:19.890 "trsvcid": "48320" 00:18:19.890 }, 00:18:19.890 "auth": { 00:18:19.890 "state": "completed", 00:18:19.890 "digest": "sha384", 00:18:19.890 "dhgroup": "ffdhe2048" 00:18:19.890 } 00:18:19.890 } 00:18:19.890 ]' 00:18:19.890 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.890 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.890 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.890 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.890 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.890 20:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.890 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.890 20:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.455 20:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:18:21.827 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.827 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:21.827 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.827 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.827 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.827 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.827 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.827 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.085 
20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.085 20:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.343 00:18:22.343 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.343 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.343 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.908 { 00:18:22.908 "cntlid": 63, 00:18:22.908 "qid": 0, 00:18:22.908 "state": "enabled", 00:18:22.908 "thread": "nvmf_tgt_poll_group_000", 00:18:22.908 "listen_address": { 00:18:22.908 "trtype": "TCP", 00:18:22.908 "adrfam": "IPv4", 00:18:22.908 "traddr": "10.0.0.2", 00:18:22.908 "trsvcid": "4420" 00:18:22.908 }, 00:18:22.908 "peer_address": { 00:18:22.908 "trtype": "TCP", 00:18:22.908 "adrfam": "IPv4", 00:18:22.908 "traddr": "10.0.0.1", 00:18:22.908 "trsvcid": "48348" 00:18:22.908 }, 00:18:22.908 "auth": { 00:18:22.908 "state": "completed", 00:18:22.908 "digest": "sha384", 00:18:22.908 "dhgroup": "ffdhe2048" 00:18:22.908 } 00:18:22.908 } 00:18:22.908 ]' 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.908 20:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:23.475 20:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.849 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.415 20:12:28 
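
Every entry in this trace carries the same prefix fields, which makes the interleaved target/host output easier to follow. Taking one entry from just above as an example (the field interpretation is an inference from the log itself: the build starts at 00:00:00.001, so the first column appears to be elapsed pipeline time and the second local wall-clock time):

    00:18:25.415 20:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0
    ^elapsed     ^wall    ^dotted test-suite path                        ^script:line           ^traced command

The script:line field (target/auth.sh@96 here) identifies which line of the test script emitted the xtrace entry, so recurring markers like auth.sh@31 (the hostrpc wrapper) or auth.sh@52 (the nvme connect leg) can be used to navigate the loop iterations below.
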
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.672 00:18:25.672 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.672 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.673 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.238 { 00:18:26.238 "cntlid": 65, 00:18:26.238 "qid": 0, 00:18:26.238 "state": "enabled", 00:18:26.238 "thread": "nvmf_tgt_poll_group_000", 00:18:26.238 "listen_address": { 00:18:26.238 "trtype": "TCP", 00:18:26.238 "adrfam": "IPv4", 00:18:26.238 "traddr": "10.0.0.2", 00:18:26.238 "trsvcid": "4420" 00:18:26.238 }, 00:18:26.238 "peer_address": { 00:18:26.238 "trtype": "TCP", 00:18:26.238 "adrfam": "IPv4", 00:18:26.238 "traddr": "10.0.0.1", 00:18:26.238 "trsvcid": "51174" 00:18:26.238 }, 00:18:26.238 "auth": { 00:18:26.238 "state": "completed", 00:18:26.238 "digest": "sha384", 00:18:26.238 "dhgroup": "ffdhe3072" 00:18:26.238 } 00:18:26.238 } 00:18:26.238 ]' 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.238 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.239 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.239 20:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.806 20:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:18:28.205 20:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.205 20:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:28.205 20:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.205 20:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.205 20:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.205 20:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.205 20:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.205 20:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.474 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.407 00:18:29.408 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.408 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.408 20:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.679 { 00:18:29.679 "cntlid": 67, 00:18:29.679 "qid": 0, 00:18:29.679 "state": "enabled", 00:18:29.679 "thread": "nvmf_tgt_poll_group_000", 00:18:29.679 "listen_address": { 00:18:29.679 "trtype": "TCP", 00:18:29.679 "adrfam": "IPv4", 00:18:29.679 "traddr": "10.0.0.2", 00:18:29.679 "trsvcid": "4420" 00:18:29.679 }, 00:18:29.679 "peer_address": { 00:18:29.679 "trtype": "TCP", 00:18:29.679 "adrfam": "IPv4", 00:18:29.679 "traddr": "10.0.0.1", 00:18:29.679 "trsvcid": "51210" 00:18:29.679 }, 00:18:29.679 "auth": { 00:18:29.679 "state": "completed", 00:18:29.679 "digest": "sha384", 00:18:29.679 "dhgroup": "ffdhe3072" 00:18:29.679 } 00:18:29.679 } 00:18:29.679 ]' 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.679 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.944 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.944 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.944 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.944 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.944 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.202 20:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:18:31.576 20:12:35 
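
After each attach, the helper confirms that authentication actually completed with the negotiated parameters rather than trusting the attach return code: it lists controllers on the host side, then pulls the qpair list from the target and checks the auth fields of the JSON shown above. A condensed sketch of those checks, reusing the shorthand variables from the earlier sketch (the jq expressions are verbatim from the trace):

    # Host side: the controller must exist under the expected name.
    name=$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Target side: the qpair must report exactly the digest, dhgroup and
    # state that this round configured (ffdhe3072 in the round above).
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
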
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.576 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:31.576 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.576 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.576 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.576 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.576 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.576 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.140 20:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.706 00:18:32.706 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.706 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.706 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.272 { 00:18:33.272 "cntlid": 69, 00:18:33.272 "qid": 0, 00:18:33.272 "state": "enabled", 00:18:33.272 "thread": "nvmf_tgt_poll_group_000", 00:18:33.272 "listen_address": { 00:18:33.272 "trtype": "TCP", 00:18:33.272 "adrfam": "IPv4", 00:18:33.272 "traddr": "10.0.0.2", 00:18:33.272 "trsvcid": "4420" 00:18:33.272 }, 00:18:33.272 "peer_address": { 00:18:33.272 "trtype": "TCP", 00:18:33.272 "adrfam": "IPv4", 00:18:33.272 "traddr": "10.0.0.1", 00:18:33.272 "trsvcid": "51250" 00:18:33.272 }, 00:18:33.272 "auth": { 00:18:33.272 "state": "completed", 00:18:33.272 "digest": "sha384", 00:18:33.272 "dhgroup": "ffdhe3072" 00:18:33.272 } 00:18:33.272 } 00:18:33.272 ]' 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.272 20:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.838 20:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:18:34.770 20:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.770 20:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:34.770 20:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.770 20:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.770 20:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.770 20:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.770 20:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.770 20:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.336 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.269 00:18:36.269 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.269 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.269 20:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.526 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.526 20:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.526 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.526 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.526 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.526 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.526 { 00:18:36.526 "cntlid": 71, 00:18:36.526 "qid": 0, 00:18:36.526 "state": "enabled", 00:18:36.526 "thread": "nvmf_tgt_poll_group_000", 00:18:36.526 "listen_address": { 00:18:36.526 "trtype": "TCP", 00:18:36.526 "adrfam": "IPv4", 00:18:36.526 "traddr": "10.0.0.2", 00:18:36.526 "trsvcid": "4420" 00:18:36.526 }, 00:18:36.526 "peer_address": { 00:18:36.526 "trtype": "TCP", 00:18:36.526 "adrfam": "IPv4", 00:18:36.526 "traddr": "10.0.0.1", 00:18:36.526 "trsvcid": "52730" 00:18:36.526 }, 00:18:36.526 "auth": { 00:18:36.526 "state": "completed", 00:18:36.527 "digest": "sha384", 00:18:36.527 "dhgroup": "ffdhe3072" 00:18:36.527 } 00:18:36.527 } 00:18:36.527 ]' 00:18:36.527 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.527 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.527 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.527 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.527 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.527 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.527 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.527 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.092 20:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:18:38.465 20:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.465 20:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:38.465 20:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.465 20:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.465 20:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.465 20:12:41 
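
Each round also exercises the kernel initiator: after the SPDK host controller is detached, plain nvme-cli connects with the same key material, passed in the DHHC-1 transport format seen in the --dhchap-secret arguments above. In that representation (per the NVMe DH-HMAC-CHAP secret encoding, as used here) the field after DHHC-1 selects the hash associated with the secret: 00 for no specific hash, 01 for SHA-256, 02 for SHA-384, 03 for SHA-512, followed by the base64-encoded key material. A sketch of the kernel leg with the secrets deliberately elided:

    # Kernel-initiator leg of one round; addresses and flags as in the log.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret 'DHHC-1:03:...'   # host key; secret elided here
    # Bidirectional rounds (keys 0-2 in this run) additionally pass
    # --dhchap-ctrl-secret 'DHHC-1:..:...' so the controller authenticates too.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
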
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.465 20:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.465 20:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.465 20:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.030 20:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.288 00:18:39.288 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.288 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.288 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.852 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.852 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.852 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.852 20:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.110 { 00:18:40.110 "cntlid": 73, 00:18:40.110 "qid": 0, 00:18:40.110 "state": "enabled", 00:18:40.110 "thread": "nvmf_tgt_poll_group_000", 00:18:40.110 "listen_address": { 00:18:40.110 "trtype": "TCP", 00:18:40.110 "adrfam": "IPv4", 00:18:40.110 "traddr": "10.0.0.2", 00:18:40.110 "trsvcid": "4420" 00:18:40.110 }, 00:18:40.110 "peer_address": { 00:18:40.110 "trtype": "TCP", 00:18:40.110 "adrfam": "IPv4", 00:18:40.110 "traddr": "10.0.0.1", 00:18:40.110 "trsvcid": "52756" 00:18:40.110 }, 00:18:40.110 "auth": { 00:18:40.110 "state": "completed", 00:18:40.110 "digest": "sha384", 00:18:40.110 "dhgroup": "ffdhe4096" 00:18:40.110 } 00:18:40.110 } 00:18:40.110 ]' 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.110 20:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.367 20:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:18:41.740 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.740 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:41.740 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.740 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.740 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.740 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.740 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.740 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.998 20:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.562 00:18:42.562 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.563 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.563 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:18:43.158 { 00:18:43.158 "cntlid": 75, 00:18:43.158 "qid": 0, 00:18:43.158 "state": "enabled", 00:18:43.158 "thread": "nvmf_tgt_poll_group_000", 00:18:43.158 "listen_address": { 00:18:43.158 "trtype": "TCP", 00:18:43.158 "adrfam": "IPv4", 00:18:43.158 "traddr": "10.0.0.2", 00:18:43.158 "trsvcid": "4420" 00:18:43.158 }, 00:18:43.158 "peer_address": { 00:18:43.158 "trtype": "TCP", 00:18:43.158 "adrfam": "IPv4", 00:18:43.158 "traddr": "10.0.0.1", 00:18:43.158 "trsvcid": "52780" 00:18:43.158 }, 00:18:43.158 "auth": { 00:18:43.158 "state": "completed", 00:18:43.158 "digest": "sha384", 00:18:43.158 "dhgroup": "ffdhe4096" 00:18:43.158 } 00:18:43.158 } 00:18:43.158 ]' 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.158 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.416 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.416 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.416 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.416 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.416 20:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.981 20:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:18:45.354 20:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.354 20:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:45.354 20:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.355 20:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.355 20:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.355 20:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.355 20:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.355 20:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.612 
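
Between rounds nothing is reused, so each key starts from a clean slate. The trace shows the same end-of-round ordering every time: the SPDK controller is detached first, then the kernel-initiator connect/disconnect runs, and finally the host entry is removed from the subsystem. Condensed, with the script:line markers from the trace as comments:

    # End of one round, in the order the trace shows:
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0     # auth.sh@49
    # (kernel connect/disconnect leg runs here; see the nvme-cli sketch above)
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                # auth.sh@55
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"       # auth.sh@56
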
20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.612 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.613 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.613 20:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.545 00:18:46.545 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.545 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.545 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.803 { 00:18:46.803 "cntlid": 77, 00:18:46.803 "qid": 0, 00:18:46.803 "state": "enabled", 00:18:46.803 "thread": "nvmf_tgt_poll_group_000", 00:18:46.803 "listen_address": { 00:18:46.803 "trtype": "TCP", 00:18:46.803 "adrfam": "IPv4", 00:18:46.803 "traddr": "10.0.0.2", 00:18:46.803 "trsvcid": "4420" 00:18:46.803 }, 00:18:46.803 "peer_address": { 
00:18:46.803 "trtype": "TCP", 00:18:46.803 "adrfam": "IPv4", 00:18:46.803 "traddr": "10.0.0.1", 00:18:46.803 "trsvcid": "32772" 00:18:46.803 }, 00:18:46.803 "auth": { 00:18:46.803 "state": "completed", 00:18:46.803 "digest": "sha384", 00:18:46.803 "dhgroup": "ffdhe4096" 00:18:46.803 } 00:18:46.803 } 00:18:46.803 ]' 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.803 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.061 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.061 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.061 20:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.627 20:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:18:48.561 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.561 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:48.561 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.561 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.561 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.561 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.561 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.561 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.128 20:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.704 00:18:49.704 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.704 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.704 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.965 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.965 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.965 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.965 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.965 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.965 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.965 { 00:18:49.965 "cntlid": 79, 00:18:49.965 "qid": 0, 00:18:49.965 "state": "enabled", 00:18:49.965 "thread": "nvmf_tgt_poll_group_000", 00:18:49.965 "listen_address": { 00:18:49.965 "trtype": "TCP", 00:18:49.965 "adrfam": "IPv4", 00:18:49.965 "traddr": "10.0.0.2", 00:18:49.965 "trsvcid": "4420" 00:18:49.965 }, 00:18:49.965 "peer_address": { 00:18:49.965 "trtype": "TCP", 00:18:49.965 "adrfam": "IPv4", 00:18:49.965 "traddr": "10.0.0.1", 00:18:49.965 "trsvcid": "32802" 00:18:49.965 }, 00:18:49.965 "auth": { 00:18:49.965 "state": "completed", 00:18:49.965 "digest": "sha384", 00:18:49.965 "dhgroup": "ffdhe4096" 00:18:49.965 } 00:18:49.965 } 00:18:49.965 ]' 00:18:49.966 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:49.966 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.966 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.966 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.966 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.224 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.224 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.224 20:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.482 20:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.855 20:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
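
The ckey assignment traced just above is what makes the key3 rounds unidirectional: connect_authenticate receives the key index as its third positional parameter, and ${ckeys[$3]:+...} expands to --dhchap-ctrlr-key "ckey$3" only when a controller key exists at that index. No ckey3 appears anywhere in this trace, so key3 rounds authenticate the host only, while keys 0-2 also pass a controller key for bidirectional authentication. The idiom in isolation (keyid stands in for the function's $3; the array values are placeholders, not real keys):

    # ${arr[i]:+word} expands to word only if arr[i] is set and non-empty.
    ckeys=([0]=ckey0-val [1]=ckey1-val [2]=ckey2-val)   # no index 3, as here
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # 0 -> no controller-key flag is appended for key3
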
00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.421 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.354 00:18:53.354 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.354 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.354 20:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.612 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.612 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.612 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.612 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.612 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.612 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.612 { 00:18:53.612 "cntlid": 81, 00:18:53.612 "qid": 0, 00:18:53.612 "state": "enabled", 00:18:53.612 "thread": "nvmf_tgt_poll_group_000", 00:18:53.612 "listen_address": { 00:18:53.612 "trtype": "TCP", 00:18:53.612 "adrfam": "IPv4", 00:18:53.612 "traddr": "10.0.0.2", 00:18:53.612 "trsvcid": "4420" 00:18:53.612 }, 00:18:53.612 "peer_address": { 00:18:53.612 "trtype": "TCP", 00:18:53.612 "adrfam": "IPv4", 00:18:53.612 "traddr": "10.0.0.1", 00:18:53.612 "trsvcid": "32814" 00:18:53.612 }, 00:18:53.612 "auth": { 00:18:53.612 "state": "completed", 00:18:53.612 "digest": "sha384", 00:18:53.612 "dhgroup": "ffdhe6144" 00:18:53.612 } 00:18:53.612 } 00:18:53.612 ]' 00:18:53.612 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.870 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.870 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.870 20:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.870 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.870 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.870 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.870 20:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.436 20:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:18:55.809 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.809 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:55.809 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.809 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.809 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.809 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.809 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.809 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.067 20:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.067 20:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.002 00:18:57.002 20:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.002 20:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.002 20:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.268 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.268 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.268 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.268 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.542 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.542 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.542 { 00:18:57.542 "cntlid": 83, 00:18:57.542 "qid": 0, 00:18:57.542 "state": "enabled", 00:18:57.542 "thread": "nvmf_tgt_poll_group_000", 00:18:57.542 "listen_address": { 00:18:57.542 "trtype": "TCP", 00:18:57.542 "adrfam": "IPv4", 00:18:57.542 "traddr": "10.0.0.2", 00:18:57.542 "trsvcid": "4420" 00:18:57.542 }, 00:18:57.542 "peer_address": { 00:18:57.542 "trtype": "TCP", 00:18:57.542 "adrfam": "IPv4", 00:18:57.542 "traddr": "10.0.0.1", 00:18:57.542 "trsvcid": "35586" 00:18:57.542 }, 00:18:57.542 "auth": { 00:18:57.542 "state": "completed", 00:18:57.542 "digest": "sha384", 00:18:57.542 "dhgroup": "ffdhe6144" 00:18:57.542 } 00:18:57.542 } 00:18:57.542 ]' 00:18:57.542 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.542 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.542 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.542 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.542 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.542 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.543 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.543 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.126 20:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:18:59.499 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.499 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:59.499 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.499 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.499 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.499 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.499 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.499 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.064 20:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.064 20:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.630 00:19:00.630 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.630 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.630 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.195 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.195 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.195 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.195 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.195 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.195 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.195 { 00:19:01.195 "cntlid": 85, 00:19:01.195 "qid": 0, 00:19:01.195 "state": "enabled", 00:19:01.195 "thread": "nvmf_tgt_poll_group_000", 00:19:01.195 "listen_address": { 00:19:01.195 "trtype": "TCP", 00:19:01.195 "adrfam": "IPv4", 00:19:01.195 "traddr": "10.0.0.2", 00:19:01.195 "trsvcid": "4420" 00:19:01.195 }, 00:19:01.195 "peer_address": { 00:19:01.195 "trtype": "TCP", 00:19:01.195 "adrfam": "IPv4", 00:19:01.195 "traddr": "10.0.0.1", 00:19:01.195 "trsvcid": "35622" 00:19:01.195 }, 00:19:01.195 "auth": { 00:19:01.195 "state": "completed", 00:19:01.195 "digest": "sha384", 00:19:01.195 "dhgroup": "ffdhe6144" 00:19:01.195 } 00:19:01.195 } 00:19:01.195 ]' 00:19:01.195 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.196 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.196 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.196 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.196 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.196 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.196 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.196 20:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.761 20:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:19:03.134 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.134 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:03.134 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.134 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.134 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.134 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.134 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:03.134 20:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.700 20:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.700 20:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.266 00:19:04.266 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.266 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.266 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.832 { 00:19:04.832 "cntlid": 87, 00:19:04.832 "qid": 0, 00:19:04.832 "state": "enabled", 00:19:04.832 "thread": "nvmf_tgt_poll_group_000", 00:19:04.832 "listen_address": { 00:19:04.832 "trtype": "TCP", 00:19:04.832 "adrfam": "IPv4", 00:19:04.832 "traddr": "10.0.0.2", 00:19:04.832 "trsvcid": "4420" 00:19:04.832 }, 00:19:04.832 "peer_address": { 00:19:04.832 "trtype": "TCP", 00:19:04.832 "adrfam": "IPv4", 00:19:04.832 "traddr": "10.0.0.1", 00:19:04.832 "trsvcid": "35644" 00:19:04.832 }, 00:19:04.832 "auth": { 00:19:04.832 "state": "completed", 00:19:04.832 "digest": "sha384", 00:19:04.832 "dhgroup": "ffdhe6144" 00:19:04.832 } 00:19:04.832 } 00:19:04.832 ]' 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.832 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.090 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.090 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.090 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.090 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.090 20:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.655 20:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:07.026 20:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.284 20:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.658 00:19:08.915 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.915 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.915 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.173 { 00:19:09.173 "cntlid": 89, 00:19:09.173 "qid": 0, 00:19:09.173 "state": "enabled", 00:19:09.173 "thread": "nvmf_tgt_poll_group_000", 00:19:09.173 "listen_address": { 00:19:09.173 "trtype": "TCP", 00:19:09.173 "adrfam": "IPv4", 00:19:09.173 "traddr": "10.0.0.2", 00:19:09.173 "trsvcid": "4420" 00:19:09.173 }, 00:19:09.173 "peer_address": { 00:19:09.173 "trtype": "TCP", 00:19:09.173 "adrfam": "IPv4", 00:19:09.173 "traddr": "10.0.0.1", 00:19:09.173 "trsvcid": "37992" 00:19:09.173 }, 00:19:09.173 "auth": { 00:19:09.173 "state": "completed", 00:19:09.173 "digest": "sha384", 00:19:09.173 "dhgroup": "ffdhe8192" 00:19:09.173 } 00:19:09.173 } 00:19:09.173 ]' 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.173 20:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.755 20:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:19:11.135 20:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.135 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:11.135 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.135 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.135 20:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.393 20:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.767 00:19:12.767 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.767 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.767 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.045 { 00:19:13.045 "cntlid": 91, 00:19:13.045 "qid": 0, 00:19:13.045 "state": "enabled", 00:19:13.045 "thread": "nvmf_tgt_poll_group_000", 00:19:13.045 "listen_address": { 00:19:13.045 "trtype": "TCP", 00:19:13.045 "adrfam": "IPv4", 00:19:13.045 "traddr": "10.0.0.2", 00:19:13.045 "trsvcid": "4420" 00:19:13.045 }, 00:19:13.045 "peer_address": { 00:19:13.045 "trtype": "TCP", 00:19:13.045 "adrfam": "IPv4", 00:19:13.045 "traddr": "10.0.0.1", 00:19:13.045 "trsvcid": "38014" 00:19:13.045 }, 00:19:13.045 "auth": { 00:19:13.045 "state": "completed", 00:19:13.045 "digest": "sha384", 00:19:13.045 "dhgroup": "ffdhe8192" 00:19:13.045 } 00:19:13.045 } 00:19:13.045 ]' 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.045 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.309 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.309 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.309 20:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.874 20:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:19:15.249 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.249 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:15.249 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.249 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.249 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.249 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.249 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.249 20:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.249 20:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.148 00:19:17.148 20:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.148 20:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.148 20:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.406 { 00:19:17.406 "cntlid": 93, 00:19:17.406 "qid": 0, 00:19:17.406 "state": "enabled", 00:19:17.406 "thread": "nvmf_tgt_poll_group_000", 00:19:17.406 "listen_address": { 00:19:17.406 "trtype": "TCP", 00:19:17.406 "adrfam": "IPv4", 00:19:17.406 "traddr": "10.0.0.2", 00:19:17.406 "trsvcid": "4420" 00:19:17.406 }, 00:19:17.406 "peer_address": { 00:19:17.406 "trtype": "TCP", 00:19:17.406 "adrfam": "IPv4", 00:19:17.406 "traddr": "10.0.0.1", 00:19:17.406 "trsvcid": "56494" 00:19:17.406 }, 00:19:17.406 "auth": { 00:19:17.406 "state": "completed", 00:19:17.406 "digest": "sha384", 00:19:17.406 "dhgroup": "ffdhe8192" 00:19:17.406 } 00:19:17.406 } 00:19:17.406 ]' 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.406 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.663 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.664 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.664 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.229 20:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:19:19.602 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.602 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:19.602 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.602 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.602 20:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.602 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.602 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:19.602 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.860 20:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.234 00:19:21.234 20:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.234 20:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.234 20:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.492 { 00:19:21.492 "cntlid": 95, 00:19:21.492 "qid": 0, 00:19:21.492 "state": "enabled", 00:19:21.492 "thread": "nvmf_tgt_poll_group_000", 00:19:21.492 "listen_address": { 00:19:21.492 "trtype": "TCP", 00:19:21.492 "adrfam": "IPv4", 00:19:21.492 "traddr": "10.0.0.2", 00:19:21.492 "trsvcid": "4420" 00:19:21.492 }, 00:19:21.492 "peer_address": { 00:19:21.492 "trtype": "TCP", 00:19:21.492 "adrfam": "IPv4", 00:19:21.492 "traddr": "10.0.0.1", 00:19:21.492 "trsvcid": "56526" 00:19:21.492 }, 00:19:21.492 "auth": { 00:19:21.492 "state": "completed", 00:19:21.492 "digest": "sha384", 00:19:21.492 "dhgroup": "ffdhe8192" 00:19:21.492 } 00:19:21.492 } 00:19:21.492 ]' 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.492 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.058 20:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.432 20:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.432 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.689 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:23.689 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.689 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.689 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:23.689 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.689 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.689 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.689 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.690 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.690 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.690 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.690 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.255 00:19:24.255 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.255 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.255 20:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.821 20:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.821 { 00:19:24.821 "cntlid": 97, 00:19:24.821 "qid": 0, 00:19:24.821 "state": "enabled", 00:19:24.821 "thread": "nvmf_tgt_poll_group_000", 00:19:24.821 "listen_address": { 00:19:24.821 "trtype": "TCP", 00:19:24.821 "adrfam": "IPv4", 00:19:24.821 "traddr": "10.0.0.2", 00:19:24.821 "trsvcid": "4420" 00:19:24.821 }, 00:19:24.821 "peer_address": { 00:19:24.821 "trtype": "TCP", 00:19:24.821 "adrfam": "IPv4", 00:19:24.821 "traddr": "10.0.0.1", 00:19:24.821 "trsvcid": "56556" 00:19:24.821 }, 00:19:24.821 "auth": { 00:19:24.821 "state": "completed", 00:19:24.821 "digest": "sha512", 00:19:24.821 "dhgroup": "null" 00:19:24.821 } 00:19:24.821 } 00:19:24.821 ]' 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.821 20:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.755 20:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:19:27.128 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.128 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:27.128 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.128 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.128 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.128 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.128 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.129 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.386 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:27.386 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.386 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.386 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:27.386 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.386 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.386 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.386 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.387 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.387 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.387 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.387 20:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.644 00:19:27.644 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.644 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.644 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.209 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.209 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.210 { 00:19:28.210 "cntlid": 99, 00:19:28.210 "qid": 0, 00:19:28.210 "state": "enabled", 00:19:28.210 "thread": "nvmf_tgt_poll_group_000", 00:19:28.210 "listen_address": { 00:19:28.210 "trtype": "TCP", 00:19:28.210 "adrfam": "IPv4", 00:19:28.210 
"traddr": "10.0.0.2", 00:19:28.210 "trsvcid": "4420" 00:19:28.210 }, 00:19:28.210 "peer_address": { 00:19:28.210 "trtype": "TCP", 00:19:28.210 "adrfam": "IPv4", 00:19:28.210 "traddr": "10.0.0.1", 00:19:28.210 "trsvcid": "49638" 00:19:28.210 }, 00:19:28.210 "auth": { 00:19:28.210 "state": "completed", 00:19:28.210 "digest": "sha512", 00:19:28.210 "dhgroup": "null" 00:19:28.210 } 00:19:28.210 } 00:19:28.210 ]' 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:28.210 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.479 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.479 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.479 20:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.759 20:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:19:30.132 20:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.132 20:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:30.132 20:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.132 20:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.132 20:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.132 20:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.132 20:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:30.132 20:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.391 20:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.391 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.648 00:19:30.648 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.648 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.648 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.214 { 00:19:31.214 "cntlid": 101, 00:19:31.214 "qid": 0, 00:19:31.214 "state": "enabled", 00:19:31.214 "thread": "nvmf_tgt_poll_group_000", 00:19:31.214 "listen_address": { 00:19:31.214 "trtype": "TCP", 00:19:31.214 "adrfam": "IPv4", 00:19:31.214 "traddr": "10.0.0.2", 00:19:31.214 "trsvcid": "4420" 00:19:31.214 }, 00:19:31.214 "peer_address": { 00:19:31.214 "trtype": "TCP", 00:19:31.214 "adrfam": "IPv4", 00:19:31.214 "traddr": "10.0.0.1", 00:19:31.214 "trsvcid": "49658" 00:19:31.214 }, 00:19:31.214 "auth": { 00:19:31.214 "state": "completed", 00:19:31.214 "digest": "sha512", 00:19:31.214 "dhgroup": "null" 
00:19:31.214 } 00:19:31.214 } 00:19:31.214 ]' 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.214 20:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.471 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.471 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.471 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.471 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.471 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.730 20:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:19:33.103 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.361 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:33.361 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.361 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.361 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.361 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.361 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:33.361 20:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.620 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.877 00:19:34.136 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.136 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.136 20:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.393 { 00:19:34.393 "cntlid": 103, 00:19:34.393 "qid": 0, 00:19:34.393 "state": "enabled", 00:19:34.393 "thread": "nvmf_tgt_poll_group_000", 00:19:34.393 "listen_address": { 00:19:34.393 "trtype": "TCP", 00:19:34.393 "adrfam": "IPv4", 00:19:34.393 "traddr": "10.0.0.2", 00:19:34.393 "trsvcid": "4420" 00:19:34.393 }, 00:19:34.393 "peer_address": { 00:19:34.393 "trtype": "TCP", 00:19:34.393 "adrfam": "IPv4", 00:19:34.393 "traddr": "10.0.0.1", 00:19:34.393 "trsvcid": "49684" 00:19:34.393 }, 00:19:34.393 "auth": { 00:19:34.393 "state": "completed", 00:19:34.393 "digest": "sha512", 00:19:34.393 "dhgroup": "null" 00:19:34.393 } 00:19:34.393 } 00:19:34.393 ]' 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.393 20:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:34.393 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.651 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.651 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.651 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.908 20:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.284 20:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.542 20:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.542 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.800 00:19:36.800 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.800 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.800 20:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.365 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.365 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.365 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.365 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.365 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.365 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.365 { 00:19:37.365 "cntlid": 105, 00:19:37.365 "qid": 0, 00:19:37.365 "state": "enabled", 00:19:37.365 "thread": "nvmf_tgt_poll_group_000", 00:19:37.365 "listen_address": { 00:19:37.365 "trtype": "TCP", 00:19:37.365 "adrfam": "IPv4", 00:19:37.365 "traddr": "10.0.0.2", 00:19:37.365 "trsvcid": "4420" 00:19:37.365 }, 00:19:37.365 "peer_address": { 00:19:37.365 "trtype": "TCP", 00:19:37.365 "adrfam": "IPv4", 00:19:37.365 "traddr": "10.0.0.1", 00:19:37.365 "trsvcid": "57104" 00:19:37.365 }, 00:19:37.365 "auth": { 00:19:37.365 "state": "completed", 00:19:37.365 "digest": "sha512", 00:19:37.365 "dhgroup": "ffdhe2048" 00:19:37.365 } 00:19:37.365 } 00:19:37.365 ]' 00:19:37.365 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.366 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.366 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.366 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.366 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.623 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.623 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.623 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.881 20:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:19:38.814 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.072 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:39.072 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.072 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.072 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.072 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.072 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:39.072 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.330 20:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.588 00:19:39.588 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.588 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.588 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.153 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.153 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.153 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.153 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.153 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.153 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.153 { 00:19:40.153 "cntlid": 107, 00:19:40.153 "qid": 0, 00:19:40.153 "state": "enabled", 00:19:40.153 "thread": "nvmf_tgt_poll_group_000", 00:19:40.153 "listen_address": { 00:19:40.153 "trtype": "TCP", 00:19:40.154 "adrfam": "IPv4", 00:19:40.154 "traddr": "10.0.0.2", 00:19:40.154 "trsvcid": "4420" 00:19:40.154 }, 00:19:40.154 "peer_address": { 00:19:40.154 "trtype": "TCP", 00:19:40.154 "adrfam": "IPv4", 00:19:40.154 "traddr": "10.0.0.1", 00:19:40.154 "trsvcid": "57122" 00:19:40.154 }, 00:19:40.154 "auth": { 00:19:40.154 "state": "completed", 00:19:40.154 "digest": "sha512", 00:19:40.154 "dhgroup": "ffdhe2048" 00:19:40.154 } 00:19:40.154 } 00:19:40.154 ]' 00:19:40.154 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.411 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.411 20:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.411 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.411 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.411 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.411 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.411 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.975 20:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:19:41.908 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.908 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:41.908 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.908 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.908 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.908 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.908 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:41.908 20:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
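Every pass in this trace follows one cycle from target/auth.sh: restrict the host to a single digest/dhgroup pair, authorize the host NQN on the subsystem with one of the pre-registered key pairs, attach an authenticated controller, and confirm via nvmf_subsystem_get_qpairs that the qpair negotiated exactly that pair before tearing it down. A condensed sketch of that cycle follows — not the test itself; it assumes key0..key3 and ckey0..ckey2 were registered earlier in auth.sh (ckey3 is empty, which is why the key3 passes above omit --dhchap-ctrlr-key), and the loop covers only the dhgroups visible in this stretch of the log:

# Sketch of the per-(dhgroup, keyid) authentication cycle repeated above.
# Paths, NQNs, and RPC names are copied from the trace; variable names and
# the placeholder ckeys array are illustrative assumptions.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
ckeys=([0]=1 [1]=1 [2]=1 [3]=)   # assumed: mirrors auth.sh, where ckey3 is empty

for dhgroup in null ffdhe2048 ffdhe3072; do   # groups seen in this stretch
    for keyid in 0 1 2 3; do
        # Same idiom as auth.sh: expands to nothing when ckeys[$keyid] is empty,
        # so key3 skips bidirectional (controller) authentication.
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        # Host side: accept only this digest/dhgroup combination.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # Target side (default RPC socket): authorize the host NQN with the key pair.
        "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # Authenticate a fresh controller, then inspect what was negotiated;
        # the trace asserts digest == sha512, dhgroup == $dhgroup, state == completed.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
            -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
            | jq -r '.[0].auth | .digest, .dhgroup, .state'
        # Tear down so the next iteration starts clean.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
done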
00:19:42.474 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.741 00:19:42.741 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.741 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.741 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.029 { 00:19:43.029 "cntlid": 109, 00:19:43.029 "qid": 0, 00:19:43.029 "state": "enabled", 00:19:43.029 "thread": "nvmf_tgt_poll_group_000", 00:19:43.029 "listen_address": { 00:19:43.029 "trtype": "TCP", 00:19:43.029 "adrfam": "IPv4", 00:19:43.029 "traddr": "10.0.0.2", 00:19:43.029 "trsvcid": "4420" 00:19:43.029 }, 00:19:43.029 "peer_address": { 00:19:43.029 "trtype": "TCP", 00:19:43.029 "adrfam": "IPv4", 00:19:43.029 "traddr": "10.0.0.1", 00:19:43.029 "trsvcid": "57146" 00:19:43.029 }, 00:19:43.029 "auth": { 00:19:43.029 "state": "completed", 00:19:43.029 "digest": "sha512", 00:19:43.029 "dhgroup": "ffdhe2048" 00:19:43.029 } 00:19:43.029 } 00:19:43.029 ]' 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.029 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.296 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.296 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.296 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.296 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.296 20:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.554 20:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:19:44.927 20:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.927 20:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:44.927 20:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.927 20:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.927 20:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.927 20:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.927 20:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:44.927 20:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.493 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.494 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.059 00:19:46.059 20:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.059 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.059 20:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.317 { 00:19:46.317 "cntlid": 111, 00:19:46.317 "qid": 0, 00:19:46.317 "state": "enabled", 00:19:46.317 "thread": "nvmf_tgt_poll_group_000", 00:19:46.317 "listen_address": { 00:19:46.317 "trtype": "TCP", 00:19:46.317 "adrfam": "IPv4", 00:19:46.317 "traddr": "10.0.0.2", 00:19:46.317 "trsvcid": "4420" 00:19:46.317 }, 00:19:46.317 "peer_address": { 00:19:46.317 "trtype": "TCP", 00:19:46.317 "adrfam": "IPv4", 00:19:46.317 "traddr": "10.0.0.1", 00:19:46.317 "trsvcid": "48836" 00:19:46.317 }, 00:19:46.317 "auth": { 00:19:46.317 "state": "completed", 00:19:46.317 "digest": "sha512", 00:19:46.317 "dhgroup": "ffdhe2048" 00:19:46.317 } 00:19:46.317 } 00:19:46.317 ]' 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.317 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.574 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.574 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.574 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.574 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.574 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.845 20:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:19:48.219 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.219 20:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:48.219 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.219 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.219 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.219 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.219 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.219 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:48.219 20:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.477 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.735 00:19:48.735 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.735 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.735 20:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.301 { 00:19:49.301 "cntlid": 113, 00:19:49.301 "qid": 0, 00:19:49.301 "state": "enabled", 00:19:49.301 "thread": "nvmf_tgt_poll_group_000", 00:19:49.301 "listen_address": { 00:19:49.301 "trtype": "TCP", 00:19:49.301 "adrfam": "IPv4", 00:19:49.301 "traddr": "10.0.0.2", 00:19:49.301 "trsvcid": "4420" 00:19:49.301 }, 00:19:49.301 "peer_address": { 00:19:49.301 "trtype": "TCP", 00:19:49.301 "adrfam": "IPv4", 00:19:49.301 "traddr": "10.0.0.1", 00:19:49.301 "trsvcid": "48850" 00:19:49.301 }, 00:19:49.301 "auth": { 00:19:49.301 "state": "completed", 00:19:49.301 "digest": "sha512", 00:19:49.301 "dhgroup": "ffdhe3072" 00:19:49.301 } 00:19:49.301 } 00:19:49.301 ]' 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.301 20:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.559 20:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:19:50.943 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.943 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:50.943 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.943 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.943 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.943 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.943 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:50.943 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.201 20:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.767 00:19:51.767 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.767 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.767 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.025 { 00:19:52.025 "cntlid": 115, 00:19:52.025 "qid": 0, 00:19:52.025 "state": "enabled", 00:19:52.025 "thread": "nvmf_tgt_poll_group_000", 00:19:52.025 "listen_address": { 00:19:52.025 "trtype": "TCP", 00:19:52.025 "adrfam": "IPv4", 00:19:52.025 "traddr": "10.0.0.2", 00:19:52.025 "trsvcid": "4420" 00:19:52.025 }, 00:19:52.025 "peer_address": { 00:19:52.025 "trtype": "TCP", 00:19:52.025 "adrfam": "IPv4", 00:19:52.025 "traddr": "10.0.0.1", 00:19:52.025 "trsvcid": "48880" 00:19:52.025 }, 00:19:52.025 "auth": { 00:19:52.025 "state": "completed", 00:19:52.025 "digest": "sha512", 00:19:52.025 "dhgroup": "ffdhe3072" 00:19:52.025 } 00:19:52.025 } 00:19:52.025 ]' 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.025 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.283 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:52.283 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.283 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.283 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.283 20:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.848 20:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:19:54.220 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.220 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:54.220 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.220 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.220 20:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.220 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.220 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.220 20:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.479 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.412 00:19:55.412 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.412 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.412 20:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.670 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.670 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.670 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.670 20:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.670 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.670 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.670 { 00:19:55.670 "cntlid": 117, 00:19:55.670 "qid": 0, 00:19:55.670 "state": "enabled", 00:19:55.670 "thread": "nvmf_tgt_poll_group_000", 00:19:55.670 "listen_address": { 00:19:55.670 "trtype": "TCP", 00:19:55.670 "adrfam": "IPv4", 00:19:55.670 "traddr": "10.0.0.2", 00:19:55.670 "trsvcid": "4420" 00:19:55.670 }, 00:19:55.670 "peer_address": { 00:19:55.670 "trtype": "TCP", 00:19:55.670 "adrfam": "IPv4", 00:19:55.670 "traddr": "10.0.0.1", 00:19:55.670 "trsvcid": "48900" 00:19:55.670 }, 00:19:55.670 "auth": { 00:19:55.670 "state": "completed", 00:19:55.670 "digest": "sha512", 00:19:55.670 "dhgroup": "ffdhe3072" 00:19:55.670 } 00:19:55.670 } 00:19:55.670 ]' 00:19:55.670 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.670 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.670 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.928 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.928 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.928 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.928 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.928 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.186 20:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:19:57.563 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.563 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:57.563 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.563 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.563 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.563 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.563 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:57.563 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.843 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.418 00:19:58.418 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.418 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.418 20:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.676 { 00:19:58.676 "cntlid": 119, 00:19:58.676 "qid": 0, 00:19:58.676 "state": "enabled", 00:19:58.676 "thread": 
"nvmf_tgt_poll_group_000", 00:19:58.676 "listen_address": { 00:19:58.676 "trtype": "TCP", 00:19:58.676 "adrfam": "IPv4", 00:19:58.676 "traddr": "10.0.0.2", 00:19:58.676 "trsvcid": "4420" 00:19:58.676 }, 00:19:58.676 "peer_address": { 00:19:58.676 "trtype": "TCP", 00:19:58.676 "adrfam": "IPv4", 00:19:58.676 "traddr": "10.0.0.1", 00:19:58.676 "trsvcid": "54646" 00:19:58.676 }, 00:19:58.676 "auth": { 00:19:58.676 "state": "completed", 00:19:58.676 "digest": "sha512", 00:19:58.676 "dhgroup": "ffdhe3072" 00:19:58.676 } 00:19:58.676 } 00:19:58.676 ]' 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.676 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.933 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.933 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.933 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.191 20:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:00.573 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.831 20:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.397 00:20:01.397 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.397 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.397 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.962 { 00:20:01.962 "cntlid": 121, 00:20:01.962 "qid": 0, 00:20:01.962 "state": "enabled", 00:20:01.962 "thread": "nvmf_tgt_poll_group_000", 00:20:01.962 "listen_address": { 00:20:01.962 "trtype": "TCP", 00:20:01.962 "adrfam": "IPv4", 00:20:01.962 "traddr": "10.0.0.2", 00:20:01.962 "trsvcid": "4420" 00:20:01.962 }, 00:20:01.962 "peer_address": { 00:20:01.962 "trtype": "TCP", 00:20:01.962 "adrfam": 
"IPv4", 00:20:01.962 "traddr": "10.0.0.1", 00:20:01.962 "trsvcid": "54678" 00:20:01.962 }, 00:20:01.962 "auth": { 00:20:01.962 "state": "completed", 00:20:01.962 "digest": "sha512", 00:20:01.962 "dhgroup": "ffdhe4096" 00:20:01.962 } 00:20:01.962 } 00:20:01.962 ]' 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.962 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.220 20:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:20:03.593 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.593 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:03.593 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.593 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.593 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.593 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.593 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.593 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.851 
20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.851 20:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.416 00:20:04.416 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.416 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.416 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.982 { 00:20:04.982 "cntlid": 123, 00:20:04.982 "qid": 0, 00:20:04.982 "state": "enabled", 00:20:04.982 "thread": "nvmf_tgt_poll_group_000", 00:20:04.982 "listen_address": { 00:20:04.982 "trtype": "TCP", 00:20:04.982 "adrfam": "IPv4", 00:20:04.982 "traddr": "10.0.0.2", 00:20:04.982 "trsvcid": "4420" 00:20:04.982 }, 00:20:04.982 "peer_address": { 00:20:04.982 "trtype": "TCP", 00:20:04.982 "adrfam": "IPv4", 00:20:04.982 "traddr": "10.0.0.1", 00:20:04.982 "trsvcid": "54712" 00:20:04.982 }, 00:20:04.982 "auth": { 00:20:04.982 "state": "completed", 00:20:04.982 "digest": "sha512", 00:20:04.982 "dhgroup": "ffdhe4096" 00:20:04.982 } 00:20:04.982 } 00:20:04.982 ]' 00:20:04.982 20:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.982 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.240 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.240 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.240 20:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.497 20:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:20:06.872 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.872 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:06.872 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.872 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.872 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.872 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.872 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.872 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.130 20:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.695 00:20:07.695 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.695 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.695 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.260 { 00:20:08.260 "cntlid": 125, 00:20:08.260 "qid": 0, 00:20:08.260 "state": "enabled", 00:20:08.260 "thread": "nvmf_tgt_poll_group_000", 00:20:08.260 "listen_address": { 00:20:08.260 "trtype": "TCP", 00:20:08.260 "adrfam": "IPv4", 00:20:08.260 "traddr": "10.0.0.2", 00:20:08.260 "trsvcid": "4420" 00:20:08.260 }, 00:20:08.260 "peer_address": { 00:20:08.260 "trtype": "TCP", 00:20:08.260 "adrfam": "IPv4", 00:20:08.260 "traddr": "10.0.0.1", 00:20:08.260 "trsvcid": "34366" 00:20:08.260 }, 00:20:08.260 "auth": { 00:20:08.260 "state": "completed", 00:20:08.260 "digest": "sha512", 00:20:08.260 "dhgroup": "ffdhe4096" 00:20:08.260 } 00:20:08.260 } 00:20:08.260 ]' 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.260 
20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.260 20:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.826 20:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:20:09.759 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.759 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:09.759 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.759 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.759 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.759 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.759 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.759 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.324 20:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.890 00:20:10.890 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.890 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.890 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.148 { 00:20:11.148 "cntlid": 127, 00:20:11.148 "qid": 0, 00:20:11.148 "state": "enabled", 00:20:11.148 "thread": "nvmf_tgt_poll_group_000", 00:20:11.148 "listen_address": { 00:20:11.148 "trtype": "TCP", 00:20:11.148 "adrfam": "IPv4", 00:20:11.148 "traddr": "10.0.0.2", 00:20:11.148 "trsvcid": "4420" 00:20:11.148 }, 00:20:11.148 "peer_address": { 00:20:11.148 "trtype": "TCP", 00:20:11.148 "adrfam": "IPv4", 00:20:11.148 "traddr": "10.0.0.1", 00:20:11.148 "trsvcid": "34378" 00:20:11.148 }, 00:20:11.148 "auth": { 00:20:11.148 "state": "completed", 00:20:11.148 "digest": "sha512", 00:20:11.148 "dhgroup": "ffdhe4096" 00:20:11.148 } 00:20:11.148 } 00:20:11.148 ]' 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.148 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.406 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.406 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.406 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.406 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.406 20:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.663 20:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:20:13.069 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.070 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:13.070 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.070 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.070 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.070 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.070 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.070 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.070 20:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.634 20:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.566 00:20:14.566 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.566 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.566 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.822 { 00:20:14.822 "cntlid": 129, 00:20:14.822 "qid": 0, 00:20:14.822 "state": "enabled", 00:20:14.822 "thread": "nvmf_tgt_poll_group_000", 00:20:14.822 "listen_address": { 00:20:14.822 "trtype": "TCP", 00:20:14.822 "adrfam": "IPv4", 00:20:14.822 "traddr": "10.0.0.2", 00:20:14.822 "trsvcid": "4420" 00:20:14.822 }, 00:20:14.822 "peer_address": { 00:20:14.822 "trtype": "TCP", 00:20:14.822 "adrfam": "IPv4", 00:20:14.822 "traddr": "10.0.0.1", 00:20:14.822 "trsvcid": "34418" 00:20:14.822 }, 00:20:14.822 "auth": { 00:20:14.822 "state": "completed", 00:20:14.822 "digest": "sha512", 00:20:14.822 "dhgroup": "ffdhe6144" 00:20:14.822 } 00:20:14.822 } 00:20:14.822 ]' 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.822 20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.387 
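
The nvme connect/disconnect entries that follow exercise a second initiator path: besides the SPDK host stack, each iteration also connects with the Linux kernel initiator, where nvme-cli hands the cleartext DHHC-1 secrets to the kernel and the kernel performs the handshake. A sketch with the secrets elided and hostnqn/hostid standing in for the UUID-based values used throughout this log (requires nvme-cli and a kernel with NVMe in-band authentication support):

  # Kernel-initiator leg of an iteration; secrets shortened here.
  nvme connect -t tcp -a 10.0.0.2 -i 1 \
      -n nqn.2024-03.io.spdk:cnode0 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:...:' \
      --dhchap-ctrl-secret 'DHHC-1:03:...:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
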
20:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.758 20:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.689 00:20:17.689 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.689 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.689 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.947 { 00:20:17.947 "cntlid": 131, 00:20:17.947 "qid": 0, 00:20:17.947 "state": "enabled", 00:20:17.947 "thread": "nvmf_tgt_poll_group_000", 00:20:17.947 "listen_address": { 00:20:17.947 "trtype": "TCP", 00:20:17.947 "adrfam": "IPv4", 00:20:17.947 "traddr": "10.0.0.2", 00:20:17.947 "trsvcid": "4420" 00:20:17.947 }, 00:20:17.947 "peer_address": { 00:20:17.947 "trtype": "TCP", 00:20:17.947 "adrfam": "IPv4", 00:20:17.947 "traddr": "10.0.0.1", 00:20:17.947 "trsvcid": "33918" 00:20:17.947 }, 00:20:17.947 "auth": { 00:20:17.947 "state": "completed", 00:20:17.947 "digest": "sha512", 00:20:17.947 "dhgroup": "ffdhe6144" 00:20:17.947 } 00:20:17.947 } 00:20:17.947 ]' 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.947 20:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.511 20:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:20:19.443 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.443 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:19.443 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.443 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.443 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.443 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.443 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.443 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.008 20:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.573 
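
Stepping back, the repetition across this stretch has a simple shape. The target/auth.sh@92-96 markers in the trace show a nested loop: for every (dhgroup, keyid) pair, the host is pinned to a single digest/dhgroup combination via bdev_nvme_set_options, then a full attach/verify/detach cycle runs. A reconstruction of that skeleton from the trace markers, not the verbatim script:

  # Loop shape driving this stretch of the log (reconstructed).
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this excerpt
  for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@92
      for keyid in "${!keys[@]}"; do         # target/auth.sh@93
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
              --dhchap-dhgroups "$dhgroup"   # target/auth.sh@94
          connect_authenticate sha512 "$dhgroup" "$keyid"   # @96
      done
  done
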
00:20:20.573 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.573 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.573 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.139 { 00:20:21.139 "cntlid": 133, 00:20:21.139 "qid": 0, 00:20:21.139 "state": "enabled", 00:20:21.139 "thread": "nvmf_tgt_poll_group_000", 00:20:21.139 "listen_address": { 00:20:21.139 "trtype": "TCP", 00:20:21.139 "adrfam": "IPv4", 00:20:21.139 "traddr": "10.0.0.2", 00:20:21.139 "trsvcid": "4420" 00:20:21.139 }, 00:20:21.139 "peer_address": { 00:20:21.139 "trtype": "TCP", 00:20:21.139 "adrfam": "IPv4", 00:20:21.139 "traddr": "10.0.0.1", 00:20:21.139 "trsvcid": "33942" 00:20:21.139 }, 00:20:21.139 "auth": { 00:20:21.139 "state": "completed", 00:20:21.139 "digest": "sha512", 00:20:21.139 "dhgroup": "ffdhe6144" 00:20:21.139 } 00:20:21.139 } 00:20:21.139 ]' 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.139 20:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.397 20:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:20:22.769 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.769 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:22.769 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:22.769 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.769 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.769 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.769 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.769 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:22.769 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.027 20:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.959 00:20:23.959 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.959 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.959 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.217 { 00:20:24.217 "cntlid": 135, 00:20:24.217 "qid": 0, 00:20:24.217 "state": "enabled", 00:20:24.217 "thread": "nvmf_tgt_poll_group_000", 00:20:24.217 "listen_address": { 00:20:24.217 "trtype": "TCP", 00:20:24.217 "adrfam": "IPv4", 00:20:24.217 "traddr": "10.0.0.2", 00:20:24.217 "trsvcid": "4420" 00:20:24.217 }, 00:20:24.217 "peer_address": { 00:20:24.217 "trtype": "TCP", 00:20:24.217 "adrfam": "IPv4", 00:20:24.217 "traddr": "10.0.0.1", 00:20:24.217 "trsvcid": "33972" 00:20:24.217 }, 00:20:24.217 "auth": { 00:20:24.217 "state": "completed", 00:20:24.217 "digest": "sha512", 00:20:24.217 "dhgroup": "ffdhe6144" 00:20:24.217 } 00:20:24.217 } 00:20:24.217 ]' 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.217 20:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.783 20:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.155 20:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.721 20:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.696 00:20:27.696 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.696 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.696 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.953 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.953 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
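Each pass of the digest/dhgroup loop in this trace repeats the same host/target handshake; a condensed sketch of one sha512+ffdhe8192 iteration, using the RPCs exactly as they appear above (paths shortened; target-side calls actually go through the script's rpc_cmd wrapper, and <host-uuid> stands for the host UUID used throughout this run):

    # Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side: allow the host NQN with a key pair (the ctrlr key ckey0
    # enables bidirectional authentication of the controller as well).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach with the matching keys.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Target side: the qpair dump should report the negotiated parameters.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'   # expect "completed"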
00:20:27.953 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.953 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.211 { 00:20:28.211 "cntlid": 137, 00:20:28.211 "qid": 0, 00:20:28.211 "state": "enabled", 00:20:28.211 "thread": "nvmf_tgt_poll_group_000", 00:20:28.211 "listen_address": { 00:20:28.211 "trtype": "TCP", 00:20:28.211 "adrfam": "IPv4", 00:20:28.211 "traddr": "10.0.0.2", 00:20:28.211 "trsvcid": "4420" 00:20:28.211 }, 00:20:28.211 "peer_address": { 00:20:28.211 "trtype": "TCP", 00:20:28.211 "adrfam": "IPv4", 00:20:28.211 "traddr": "10.0.0.1", 00:20:28.211 "trsvcid": "49302" 00:20:28.211 }, 00:20:28.211 "auth": { 00:20:28.211 "state": "completed", 00:20:28.211 "digest": "sha512", 00:20:28.211 "dhgroup": "ffdhe8192" 00:20:28.211 } 00:20:28.211 } 00:20:28.211 ]' 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.211 20:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.777 20:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.151 20:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.525 00:20:31.525 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.525 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.525 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.090 { 00:20:32.090 "cntlid": 139, 00:20:32.090 "qid": 0, 00:20:32.090 "state": "enabled", 00:20:32.090 "thread": "nvmf_tgt_poll_group_000", 00:20:32.090 "listen_address": { 00:20:32.090 "trtype": "TCP", 00:20:32.090 "adrfam": "IPv4", 00:20:32.090 "traddr": "10.0.0.2", 00:20:32.090 "trsvcid": "4420" 00:20:32.090 }, 00:20:32.090 "peer_address": { 00:20:32.090 "trtype": "TCP", 00:20:32.090 "adrfam": "IPv4", 00:20:32.090 "traddr": "10.0.0.1", 00:20:32.090 "trsvcid": "49330" 00:20:32.090 }, 00:20:32.090 "auth": { 00:20:32.090 "state": "completed", 00:20:32.090 "digest": "sha512", 00:20:32.090 "dhgroup": "ffdhe8192" 00:20:32.090 } 00:20:32.090 } 00:20:32.090 ]' 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.090 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.091 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.091 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.091 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.091 20:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.656 20:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YzNiYzczMDM3NTY3ZDdhZjM1NjA1MmE1ZGE0NjBjMjBjrVcB: --dhchap-ctrl-secret DHHC-1:02:YjlmNGEyZDFmM2I5NzQ0YjdkZmRkOTk5OTUyMTU5YWZmMWI1MDNjNjQ4NWU0NDUyM69n7Q==: 00:20:34.029 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.029 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:34.029 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.029 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.029 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.029 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.029 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:34.029 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.288 20:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.220 00:20:35.220 20:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.220 20:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.220 20:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.786 { 00:20:35.786 "cntlid": 141, 00:20:35.786 "qid": 0, 00:20:35.786 "state": "enabled", 00:20:35.786 "thread": "nvmf_tgt_poll_group_000", 00:20:35.786 "listen_address": 
{ 00:20:35.786 "trtype": "TCP", 00:20:35.786 "adrfam": "IPv4", 00:20:35.786 "traddr": "10.0.0.2", 00:20:35.786 "trsvcid": "4420" 00:20:35.786 }, 00:20:35.786 "peer_address": { 00:20:35.786 "trtype": "TCP", 00:20:35.786 "adrfam": "IPv4", 00:20:35.786 "traddr": "10.0.0.1", 00:20:35.786 "trsvcid": "49350" 00:20:35.786 }, 00:20:35.786 "auth": { 00:20:35.786 "state": "completed", 00:20:35.786 "digest": "sha512", 00:20:35.786 "dhgroup": "ffdhe8192" 00:20:35.786 } 00:20:35.786 } 00:20:35.786 ]' 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.786 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.044 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.044 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.044 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.303 20:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZWY0Y2MwZTZmYjgxYWE5ZThiMzdmNjg1OWZlNDMzM2UxNTIwNGI0MmQwYTIyMjhm84729A==: --dhchap-ctrl-secret DHHC-1:01:Y2RjNzc1YmFjZTdkZWEzNWIyY2YxZGMzY2E1ZjA0NTQg28rl: 00:20:37.676 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.676 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:37.676 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.676 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.676 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.676 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.676 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:37.676 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.242 20:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.614 00:20:39.614 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.614 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.614 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.872 { 00:20:39.872 "cntlid": 143, 00:20:39.872 "qid": 0, 00:20:39.872 "state": "enabled", 00:20:39.872 "thread": "nvmf_tgt_poll_group_000", 00:20:39.872 "listen_address": { 00:20:39.872 "trtype": "TCP", 00:20:39.872 "adrfam": "IPv4", 00:20:39.872 "traddr": "10.0.0.2", 00:20:39.872 "trsvcid": "4420" 00:20:39.872 }, 00:20:39.872 "peer_address": { 00:20:39.872 "trtype": "TCP", 00:20:39.872 "adrfam": "IPv4", 00:20:39.872 "traddr": "10.0.0.1", 00:20:39.872 "trsvcid": "51214" 00:20:39.872 }, 00:20:39.872 "auth": { 00:20:39.872 "state": "completed", 00:20:39.872 "digest": "sha512", 00:20:39.872 "dhgroup": 
"ffdhe8192" 00:20:39.872 } 00:20:39.872 } 00:20:39.872 ]' 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.872 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.129 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.129 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.129 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.129 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.129 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.388 20:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.759 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.017 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.017 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.017 20:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.986 00:20:42.986 20:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.986 20:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.986 20:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.556 { 00:20:43.556 "cntlid": 145, 00:20:43.556 "qid": 0, 00:20:43.556 "state": "enabled", 00:20:43.556 "thread": "nvmf_tgt_poll_group_000", 00:20:43.556 "listen_address": { 00:20:43.556 "trtype": "TCP", 00:20:43.556 "adrfam": "IPv4", 00:20:43.556 "traddr": "10.0.0.2", 00:20:43.556 "trsvcid": "4420" 00:20:43.556 }, 00:20:43.556 "peer_address": { 00:20:43.556 "trtype": "TCP", 00:20:43.556 "adrfam": "IPv4", 00:20:43.556 "traddr": "10.0.0.1", 00:20:43.556 "trsvcid": "51250" 00:20:43.556 }, 00:20:43.556 "auth": { 00:20:43.556 
"state": "completed", 00:20:43.556 "digest": "sha512", 00:20:43.556 "dhgroup": "ffdhe8192" 00:20:43.556 } 00:20:43.556 } 00:20:43.556 ]' 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.556 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.814 20:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NDg5NTRiYzMyNGY1Njg0YTgwZjU3NmVmNTZlNTczYTMyMzU3MWI2YzQ4NjkwNWY4Na3Gew==: --dhchap-ctrl-secret DHHC-1:03:NWI0OGQ5ODNhYTQ5NzU3ZGM3MzQwNzNiMjlkMjgzZDNlZDBlNmQ3ZDkzOGJjOGJmODc3NWRhYzllNWZkMmI5Y9IF2OU=: 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:45.188 20:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:45.188 20:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:46.123 request: 00:20:46.123 { 00:20:46.123 "name": "nvme0", 00:20:46.123 "trtype": "tcp", 00:20:46.123 "traddr": "10.0.0.2", 00:20:46.123 "adrfam": "ipv4", 00:20:46.123 "trsvcid": "4420", 00:20:46.123 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:46.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:46.123 "prchk_reftag": false, 00:20:46.123 "prchk_guard": false, 00:20:46.123 "hdgst": false, 00:20:46.123 "ddgst": false, 00:20:46.123 "dhchap_key": "key2", 00:20:46.123 "method": "bdev_nvme_attach_controller", 00:20:46.123 "req_id": 1 00:20:46.123 } 00:20:46.123 Got JSON-RPC error response 00:20:46.123 response: 00:20:46.123 { 00:20:46.123 "code": -5, 00:20:46.123 "message": "Input/output error" 00:20:46.123 } 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.123 
20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:46.123 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:46.381 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.381 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:46.381 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.381 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:46.381 20:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.315 request: 00:20:47.315 { 00:20:47.315 "name": "nvme0", 00:20:47.315 "trtype": "tcp", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "adrfam": "ipv4", 00:20:47.315 "trsvcid": "4420", 00:20:47.315 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:47.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:47.315 "prchk_reftag": false, 00:20:47.315 "prchk_guard": false, 00:20:47.315 "hdgst": false, 00:20:47.315 "ddgst": false, 00:20:47.315 "dhchap_key": "key1", 00:20:47.315 "dhchap_ctrlr_key": "ckey2", 00:20:47.315 "method": "bdev_nvme_attach_controller", 00:20:47.315 "req_id": 1 00:20:47.315 } 00:20:47.315 Got JSON-RPC error response 00:20:47.315 response: 00:20:47.315 { 00:20:47.315 "code": -5, 00:20:47.315 "message": "Input/output error" 00:20:47.315 } 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:47.315 20:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.315 20:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.315 20:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.689 request: 00:20:48.689 { 00:20:48.689 "name": "nvme0", 00:20:48.689 "trtype": "tcp", 00:20:48.689 "traddr": "10.0.0.2", 00:20:48.689 "adrfam": "ipv4", 00:20:48.689 "trsvcid": "4420", 00:20:48.689 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:48.689 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:48.689 "prchk_reftag": false, 00:20:48.689 "prchk_guard": false, 00:20:48.689 "hdgst": false, 00:20:48.689 "ddgst": false, 00:20:48.689 "dhchap_key": "key1", 00:20:48.689 "dhchap_ctrlr_key": "ckey1", 00:20:48.689 "method": "bdev_nvme_attach_controller", 00:20:48.689 "req_id": 1 00:20:48.689 } 00:20:48.689 Got JSON-RPC error response 00:20:48.689 response: 00:20:48.689 { 00:20:48.689 "code": -5, 00:20:48.689 "message": "Input/output error" 00:20:48.689 } 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2040272 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2040272 ']' 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2040272 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2040272 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2040272' 00:20:48.689 killing process with pid 2040272 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2040272 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2040272 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.689 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=2070816 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2070816 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2070816 ']' 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.690 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2070816 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2070816 ']' 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
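Before the negative tests, the target application is relaunched with the nvmf_auth debug log component enabled so that failed negotiations show up in the app log; a condensed form of the invocation from this run (the network namespace cvl_0_0_ns_spdk and the pid variable are specific to this CI topology):

    # Relaunch the target inside the test netns; --wait-for-rpc defers
    # subsystem init until the framework_start_init RPC arrives, and
    # -L nvmf_auth turns on DH-HMAC-CHAP trace logging.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # common test helper, as used in the trace above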
00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.257 20:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.515 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:49.515 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:49.515 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:49.515 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.515 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.773 20:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.707 00:20:50.707 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.707 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.707 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.273 { 00:20:51.273 "cntlid": 1, 00:20:51.273 "qid": 0, 00:20:51.273 "state": "enabled", 00:20:51.273 "thread": "nvmf_tgt_poll_group_000", 00:20:51.273 "listen_address": { 00:20:51.273 "trtype": "TCP", 00:20:51.273 "adrfam": "IPv4", 00:20:51.273 "traddr": "10.0.0.2", 00:20:51.273 "trsvcid": "4420" 00:20:51.273 }, 00:20:51.273 "peer_address": { 00:20:51.273 "trtype": "TCP", 00:20:51.273 "adrfam": "IPv4", 00:20:51.273 "traddr": "10.0.0.1", 00:20:51.273 "trsvcid": "43600" 00:20:51.273 }, 00:20:51.273 "auth": { 00:20:51.273 "state": "completed", 00:20:51.273 "digest": "sha512", 00:20:51.273 "dhgroup": "ffdhe8192" 00:20:51.273 } 00:20:51.273 } 00:20:51.273 ]' 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.273 20:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.839 20:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NjI5YzZmOTAxMDIxZjllOGRkZTEwYmI4MjBlYTY0Y2ZiMWIxY2M5Y2JmYzVmNjI3ZDNlNzU4NjEwMDJmYjRiMVpBpJQ=: 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:53.213 20:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.471 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.730 request: 00:20:53.730 { 00:20:53.730 "name": "nvme0", 00:20:53.730 "trtype": "tcp", 00:20:53.730 "traddr": "10.0.0.2", 00:20:53.730 "adrfam": "ipv4", 00:20:53.730 "trsvcid": "4420", 00:20:53.730 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:53.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:53.730 "prchk_reftag": false, 00:20:53.730 "prchk_guard": false, 00:20:53.730 "hdgst": false, 00:20:53.730 "ddgst": false, 00:20:53.730 "dhchap_key": "key3", 00:20:53.730 "method": "bdev_nvme_attach_controller", 00:20:53.730 "req_id": 1 00:20:53.730 } 00:20:53.730 Got JSON-RPC error response 00:20:53.730 response: 00:20:53.730 { 00:20:53.730 "code": -5, 00:20:53.730 "message": "Input/output error" 00:20:53.730 } 00:20:53.730 20:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:53.730 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:53.730 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:53.730 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:53.730 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:53.730 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:53.730 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:53.730 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.296 20:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.555 request: 00:20:54.555 { 00:20:54.555 "name": "nvme0", 00:20:54.555 "trtype": "tcp", 00:20:54.555 "traddr": "10.0.0.2", 00:20:54.555 "adrfam": "ipv4", 00:20:54.555 "trsvcid": "4420", 00:20:54.555 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:54.555 "prchk_reftag": false, 00:20:54.555 "prchk_guard": false, 00:20:54.555 "hdgst": false, 00:20:54.555 "ddgst": false, 00:20:54.555 "dhchap_key": "key3", 00:20:54.555 
"method": "bdev_nvme_attach_controller", 00:20:54.555 "req_id": 1 00:20:54.555 } 00:20:54.555 Got JSON-RPC error response 00:20:54.555 response: 00:20:54.556 { 00:20:54.556 "code": -5, 00:20:54.556 "message": "Input/output error" 00:20:54.556 } 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:54.556 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.814 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:54.815 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.815 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.815 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:55.073 request: 00:20:55.073 { 00:20:55.073 "name": "nvme0", 00:20:55.073 "trtype": "tcp", 00:20:55.073 "traddr": "10.0.0.2", 00:20:55.073 "adrfam": "ipv4", 00:20:55.073 "trsvcid": "4420", 00:20:55.073 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:55.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:55.073 "prchk_reftag": false, 00:20:55.073 "prchk_guard": false, 00:20:55.073 "hdgst": false, 00:20:55.073 "ddgst": false, 00:20:55.073 "dhchap_key": "key0", 00:20:55.073 "dhchap_ctrlr_key": "key1", 00:20:55.073 "method": "bdev_nvme_attach_controller", 00:20:55.073 "req_id": 1 00:20:55.073 } 00:20:55.073 Got JSON-RPC error response 00:20:55.073 response: 00:20:55.073 { 00:20:55.073 "code": -5, 00:20:55.073 "message": "Input/output error" 00:20:55.073 } 00:20:55.073 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:55.073 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:55.073 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:55.073 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:55.073 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.073 20:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.639 00:20:55.639 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:55.639 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
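Note: the block above exercises DH-HMAC-CHAP failure paths. bdev_nvme_set_options first narrows the host's allowed digests (and, in the next case, DH groups) so they cannot satisfy key3, and each bdev_nvme_attach_controller attempt is expected to return JSON-RPC error -5 (Input/output error) before the full sets are restored; a final plain attach is then verified via bdev_nvme_get_controllers. A condensed sketch of that pattern against the host-side app, with $hostnqn standing in for the host NQN from the trace and paths abbreviated:

    # restrict the host to SHA-256; key3 was provisioned for sha512/ffdhe8192, so this attach must fail
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 \
        || echo "attach failed as expected (-5)"
    # restore the full digest and DH-group sets before the next case
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

The same handshake is also driven through the kernel initiator earlier in the trace (nvme connect ... --dhchap-secret DHHC-1:03:...), so both the SPDK host and the in-kernel host paths are covered.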
00:20:55.639 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.897 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.897 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.897 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2040504 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2040504 ']' 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2040504 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2040504 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2040504' 00:20:56.464 killing process with pid 2040504 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2040504 00:20:56.464 20:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2040504 00:20:56.723 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:56.723 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.723 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:56.723 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.723 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:56.723 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.723 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.723 rmmod nvme_tcp 00:20:56.981 rmmod nvme_fabrics 00:20:56.981 rmmod nvme_keyring 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2070816 ']' 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2070816 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2070816 ']' 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2070816 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2070816 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2070816' 00:20:56.981 killing process with pid 2070816 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2070816 00:20:56.981 20:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2070816 00:20:57.244 20:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.244 20:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.244 20:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.244 20:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.244 20:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.244 20:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.244 20:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.244 20:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.9uw /tmp/spdk.key-sha256.bA2 /tmp/spdk.key-sha384.rRd /tmp/spdk.key-sha512.zCZ /tmp/spdk.key-sha512.wvf /tmp/spdk.key-sha384.zjk /tmp/spdk.key-sha256.aRd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:59.790 00:20:59.790 real 4m28.260s 00:20:59.790 user 10m37.462s 00:20:59.790 sys 0m35.180s 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.790 ************************************ 00:20:59.790 END TEST nvmf_auth_target 00:20:59.790 ************************************ 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:59.790 20:15:03 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:59.790 ************************************ 00:20:59.790 START TEST nvmf_bdevio_no_huge 00:20:59.790 ************************************ 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:59.790 * Looking for test storage... 00:20:59.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.790 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:59.791 20:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.791 20:15:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.324 20:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:02.324 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.324 20:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:02.324 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.324 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:02.325 Found net devices under 0000:84:00.0: cvl_0_0 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
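Note: the prologue above walks the detected PCI functions (two E810 ports, device ID 0x159b) and resolves each to its kernel net interface through sysfs. The core of that loop, replayed from the trace:

    # map each matched PCI function to its net interface name(s) under sysfs
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keeping e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done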
00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:02.325 Found net devices under 0000:84:00.1: cvl_0_1 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.325 20:15:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.325 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:21:02.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:21:02.325 00:21:02.325 --- 10.0.0.2 ping statistics --- 00:21:02.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.325 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:21:02.325 00:21:02.325 --- 10.0.0.1 ping statistics --- 00:21:02.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.325 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:02.325 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.326 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.326 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2073873 00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2073873 00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2073873 ']' 00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
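Note: the two pings above confirm the split topology that nvmf_tcp_init builds: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The sequence, collected from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in from the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT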
00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.585 20:15:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:02.585 [2024-07-24 20:15:06.207577] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:21:02.585 [2024-07-24 20:15:06.207694] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:02.844 [2024-07-24 20:15:06.388752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.103 [2024-07-24 20:15:06.643938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.103 [2024-07-24 20:15:06.644035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.103 [2024-07-24 20:15:06.644072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.103 [2024-07-24 20:15:06.644101] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.103 [2024-07-24 20:15:06.644127] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.103 [2024-07-24 20:15:06.644281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:03.103 [2024-07-24 20:15:06.644403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.103 [2024-07-24 20:15:06.644399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:03.103 [2024-07-24 20:15:06.644349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:03.669 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.669 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:03.669 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.669 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:03.669 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:03.927 [2024-07-24 20:15:07.471195] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.927 20:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:03.927 Malloc0 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:03.927 [2024-07-24 20:15:07.512276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.927 { 00:21:03.927 "params": { 00:21:03.927 "name": "Nvme$subsystem", 00:21:03.927 "trtype": "$TEST_TRANSPORT", 00:21:03.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.927 "adrfam": "ipv4", 00:21:03.927 "trsvcid": "$NVMF_PORT", 00:21:03.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.927 "hdgst": ${hdgst:-false}, 00:21:03.927 "ddgst": ${ddgst:-false} 00:21:03.927 }, 00:21:03.927 "method": "bdev_nvme_attach_controller" 00:21:03.927 } 00:21:03.927 EOF 00:21:03.927 )") 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
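Note: with the target up under --no-huge -s 1024, bdevio_no_huge provisions one malloc-backed namespace over TCP and then runs the bdevio binary, also in no-hugepage mode, against the JSON config produced by the gen_nvmf_target_json heredoc above. The target-side RPCs, condensed from the trace (in the live run they are issued through rpc_cmd against the target's socket inside the namespace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420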
00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:03.927 20:15:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:03.927 "params": { 00:21:03.927 "name": "Nvme1", 00:21:03.927 "trtype": "tcp", 00:21:03.927 "traddr": "10.0.0.2", 00:21:03.927 "adrfam": "ipv4", 00:21:03.927 "trsvcid": "4420", 00:21:03.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.927 "hdgst": false, 00:21:03.927 "ddgst": false 00:21:03.927 }, 00:21:03.927 "method": "bdev_nvme_attach_controller" 00:21:03.927 }' 00:21:03.927 [2024-07-24 20:15:07.569375] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:21:03.927 [2024-07-24 20:15:07.569484] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2074132 ] 00:21:03.927 [2024-07-24 20:15:07.697954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:04.185 [2024-07-24 20:15:07.842784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.185 [2024-07-24 20:15:07.842841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.185 [2024-07-24 20:15:07.842846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.442 I/O targets: 00:21:04.442 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:04.442 00:21:04.442 00:21:04.442 CUnit - A unit testing framework for C - Version 2.1-3 00:21:04.442 http://cunit.sourceforge.net/ 00:21:04.442 00:21:04.442 00:21:04.442 Suite: bdevio tests on: Nvme1n1 00:21:04.442 Test: blockdev write read block ...passed 00:21:04.442 Test: blockdev write zeroes read block ...passed 00:21:04.442 Test: blockdev write zeroes read no split ...passed 00:21:04.442 Test: blockdev write zeroes read split ...passed 00:21:04.700 Test: blockdev write zeroes read split partial ...passed 00:21:04.700 Test: blockdev reset ...[2024-07-24 20:15:08.279288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:04.700 [2024-07-24 20:15:08.279419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de670 (9): Bad file descriptor 00:21:04.700 [2024-07-24 20:15:08.376417] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:04.700 passed 00:21:04.700 Test: blockdev write read 8 blocks ...passed 00:21:04.700 Test: blockdev write read size > 128k ...passed 00:21:04.700 Test: blockdev write read invalid size ...passed 00:21:04.700 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:04.700 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:04.700 Test: blockdev write read max offset ...passed 00:21:04.958 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:04.958 Test: blockdev writev readv 8 blocks ...passed 00:21:04.958 Test: blockdev writev readv 30 x 1block ...passed 00:21:04.958 Test: blockdev writev readv block ...passed 00:21:04.958 Test: blockdev writev readv size > 128k ...passed 00:21:04.958 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:04.958 Test: blockdev comparev and writev ...[2024-07-24 20:15:08.591235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:04.958 [2024-07-24 20:15:08.591281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.591312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:04.958 [2024-07-24 20:15:08.591333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.591800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:04.958 [2024-07-24 20:15:08.591839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.591867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:04.958 [2024-07-24 20:15:08.591887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.592349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:04.958 [2024-07-24 20:15:08.592381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.592408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:04.958 [2024-07-24 20:15:08.592436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.592915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:04.958 [2024-07-24 20:15:08.592946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.592974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:04.958 [2024-07-24 20:15:08.592994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:04.958 passed 00:21:04.958 Test: blockdev nvme passthru rw ...passed 00:21:04.958 Test: blockdev nvme passthru vendor specific ...[2024-07-24 20:15:08.674877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.958 [2024-07-24 20:15:08.674914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.675117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.958 [2024-07-24 20:15:08.675147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.675339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.958 [2024-07-24 20:15:08.675370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:04.958 [2024-07-24 20:15:08.675584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.958 [2024-07-24 20:15:08.675615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:04.958 passed 00:21:04.958 Test: blockdev nvme admin passthru ...passed 00:21:04.958 Test: blockdev copy ...passed 00:21:04.958 00:21:04.958 Run Summary: Type Total Ran Passed Failed Inactive 00:21:04.958 suites 1 1 n/a 0 0 00:21:04.958 tests 23 23 23 0 0 00:21:04.958 asserts 152 152 152 0 n/a 00:21:04.958 00:21:04.958 Elapsed time = 1.326 seconds 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.525 rmmod nvme_tcp 00:21:05.525 rmmod nvme_fabrics 00:21:05.525 rmmod nvme_keyring 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2073873 ']' 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2073873 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2073873 ']' 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2073873 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2073873 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2073873' 00:21:05.525 killing process with pid 2073873 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2073873 00:21:05.525 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2073873 00:21:06.092 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.092 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.092 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.092 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.092 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.092 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.092 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.092 20:15:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.628 20:15:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:08.628 00:21:08.628 real 0m8.768s 00:21:08.628 user 0m16.091s 00:21:08.628 sys 0m3.602s 00:21:08.628 20:15:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:08.628 20:15:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.628 ************************************ 00:21:08.628 END TEST nvmf_bdevio_no_huge 00:21:08.628 ************************************ 00:21:08.628 20:15:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:08.628 20:15:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:08.628 20:15:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:08.628 20:15:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:08.628 ************************************ 00:21:08.628 START TEST nvmf_tls 00:21:08.628 ************************************ 00:21:08.628 20:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:08.628 * Looking for test storage... 00:21:08.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.628 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
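[annotation] The host identity that nvmf/common.sh sets up above reduces to the pattern below; the NQN/UUID are the ones generated for this run, and deriving the host ID by stripping the NQN prefix is an assumption consistent with the values printed here.

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # cd6acfbe-4794-e311-a299-001e67a97b02 (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# every later connect reuses the same identity, e.g.:
#   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn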
00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.629 20:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:11.160 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:11.160 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:11.160 Found net devices under 0000:84:00.0: cvl_0_0 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:11.160 Found net devices under 0000:84:00.1: cvl_0_1 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.160 20:15:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.160 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:11.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:21:11.161 00:21:11.161 --- 10.0.0.2 ping statistics --- 00:21:11.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.161 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:11.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:21:11.161 00:21:11.161 --- 10.0.0.1 ping statistics --- 00:21:11.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.161 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2076859 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2076859 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2076859 ']' 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.161 20:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.161 [2024-07-24 20:15:14.833451] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
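[annotation] Condensed, the target bring-up traced in this stretch is the sequence below. Every command appears in this run (the sock/TLS RPCs a few entries further down); only the backgrounding and waitforlisten glue is simplified.

# Launch nvmf_tgt inside the test namespace and hold it before framework
# init so the ssl sock impl can be configured first (--wait-for-rpc).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
# (the harness polls the RPC socket via waitforlisten before continuing)
./scripts/rpc.py sock_set_default_impl -i ssl
./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o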
00:21:11.161 [2024-07-24 20:15:14.833556] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.161 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.161 [2024-07-24 20:15:14.928594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.419 [2024-07-24 20:15:15.069266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.419 [2024-07-24 20:15:15.069336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.419 [2024-07-24 20:15:15.069355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.419 [2024-07-24 20:15:15.069372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.419 [2024-07-24 20:15:15.069386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.419 [2024-07-24 20:15:15.069444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.419 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.419 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:11.419 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.419 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.419 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.419 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.419 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:11.419 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:11.985 true 00:21:11.985 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:11.985 20:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:12.550 20:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:12.550 20:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:12.550 20:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:13.116 20:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.116 20:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:13.374 20:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:13.374 20:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:13.374 20:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:21:13.940 20:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.940 20:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:14.533 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:14.533 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:14.533 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.533 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:14.791 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:14.791 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:14.791 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:15.049 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:15.049 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:15.308 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:15.308 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:15.308 20:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:15.873 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:15.873 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.NORksAbXIw 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.sqp8Xck1BI 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.NORksAbXIw 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sqp8Xck1BI 00:21:16.131 20:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:16.389 20:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:16.954 20:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.NORksAbXIw 00:21:16.954 20:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NORksAbXIw 00:21:16.955 20:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:17.212 [2024-07-24 20:15:20.950288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.212 20:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:17.777 20:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.035 [2024-07-24 20:15:21.660264] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.035 [2024-07-24 20:15:21.660597] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.035 20:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.293 malloc0 00:21:18.293 20:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.857 20:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NORksAbXIw 00:21:19.114 [2024-07-24 20:15:22.814114] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:19.114 20:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NORksAbXIw 00:21:19.114 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.305 Initializing NVMe Controllers 00:21:31.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:31.305 Initialization complete. Launching workers. 00:21:31.305 ======================================================== 00:21:31.305 Latency(us) 00:21:31.305 Device Information : IOPS MiB/s Average min max 00:21:31.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5970.18 23.32 10724.45 1503.41 11695.23 00:21:31.305 ======================================================== 00:21:31.305 Total : 5970.18 23.32 10724.45 1503.41 11695.23 00:21:31.305 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NORksAbXIw 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NORksAbXIw' 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2078917 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2078917 /var/tmp/bdevperf.sock 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2078917 ']' 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:31.305 20:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.305 [2024-07-24 20:15:33.031088] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:21:31.305 [2024-07-24 20:15:33.031193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078917 ] 00:21:31.305 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.305 [2024-07-24 20:15:33.113793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.305 [2024-07-24 20:15:33.255107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.305 20:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.305 20:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:31.305 20:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NORksAbXIw 00:21:31.305 [2024-07-24 20:15:33.746950] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.305 [2024-07-24 20:15:33.747108] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:31.305 TLSTESTn1 00:21:31.305 20:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.305 Running I/O for 10 seconds... 
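[annotation] The /tmp/tmp.NORksAbXIw key driving this run (the target's add_host, perf's --psk-path, and the bdevperf attach just above) was produced by format_interchange_psk earlier. The `python -` body is not visible in the trace, so the sketch below is an assumed reconstruction, base64(key bytes + little-endian CRC32), which reproduces the key string echoed above.

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                             # the configured PSK, as ASCII hex
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte integrity tail (assumed)
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
PYEOF
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: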
00:21:41.273 00:21:41.273 Latency(us) 00:21:41.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.273 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:41.273 Verification LBA range: start 0x0 length 0x2000 00:21:41.273 TLSTESTn1 : 10.04 2630.83 10.28 0.00 0.00 48544.41 8058.50 50875.35 00:21:41.273 =================================================================================================================== 00:21:41.273 Total : 2630.83 10.28 0.00 0.00 48544.41 8058.50 50875.35 00:21:41.273 0 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2078917 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2078917 ']' 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2078917 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2078917 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2078917' 00:21:41.273 killing process with pid 2078917 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2078917 00:21:41.273 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.273 00:21:41.273 Latency(us) 00:21:41.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.273 =================================================================================================================== 00:21:41.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.273 [2024-07-24 20:15:44.087265] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2078917 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sqp8Xck1BI 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sqp8Xck1BI 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
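[annotation] The tls.sh@146 case entered here is a negative test: NOT inverts the wrapped command's exit status, so the attach with the mismatched key /tmp/tmp.sqp8Xck1BI must fail for the test to pass. A simplified sketch of the wrapper being traced (the real helper also screens signal exits, the `es > 128` check visible further below):

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command, capture its status
    ((es != 0))      # succeed only if the command failed
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sqp8Xck1BI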
00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sqp8Xck1BI 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sqp8Xck1BI' 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2080303 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2080303 /var/tmp/bdevperf.sock 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2080303 ']' 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.273 [2024-07-24 20:15:44.464899] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:21:41.273 [2024-07-24 20:15:44.464992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080303 ] 00:21:41.273 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.273 [2024-07-24 20:15:44.541016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.273 [2024-07-24 20:15:44.679213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:41.273 20:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sqp8Xck1BI 00:21:41.531 [2024-07-24 20:15:45.099876] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.531 [2024-07-24 20:15:45.100026] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:41.531 [2024-07-24 20:15:45.107492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:41.531 [2024-07-24 20:15:45.107908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd6d0 (107): Transport endpoint is not connected 00:21:41.531 [2024-07-24 20:15:45.108895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd6d0 (9): Bad file descriptor 00:21:41.531 [2024-07-24 20:15:45.109893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.531 [2024-07-24 20:15:45.109922] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:41.531 [2024-07-24 20:15:45.109946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:41.531 request: 00:21:41.531 { 00:21:41.531 "name": "TLSTEST", 00:21:41.531 "trtype": "tcp", 00:21:41.531 "traddr": "10.0.0.2", 00:21:41.531 "adrfam": "ipv4", 00:21:41.531 "trsvcid": "4420", 00:21:41.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.531 "prchk_reftag": false, 00:21:41.531 "prchk_guard": false, 00:21:41.531 "hdgst": false, 00:21:41.531 "ddgst": false, 00:21:41.531 "psk": "/tmp/tmp.sqp8Xck1BI", 00:21:41.531 "method": "bdev_nvme_attach_controller", 00:21:41.531 "req_id": 1 00:21:41.531 } 00:21:41.531 Got JSON-RPC error response 00:21:41.531 response: 00:21:41.531 { 00:21:41.531 "code": -5, 00:21:41.532 "message": "Input/output error" 00:21:41.532 } 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2080303 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2080303 ']' 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2080303 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2080303 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2080303' 00:21:41.532 killing process with pid 2080303 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2080303 00:21:41.532 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.532 00:21:41.532 Latency(us) 00:21:41.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.532 =================================================================================================================== 00:21:41.532 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.532 [2024-07-24 20:15:45.163696] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.532 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2080303 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NORksAbXIw 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NORksAbXIw 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NORksAbXIw 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NORksAbXIw' 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2080365 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2080365 /var/tmp/bdevperf.sock 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2080365 ']' 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.790 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.791 [2024-07-24 20:15:45.523709] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
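Each bdevperf instance here is launched with -z, so while it initializes it sits idle until driven over /var/tmp/bdevperf.sock, and waitforlisten (note max_retries=100 in the trace) blocks until that socket accepts connections before the attach RPC is sent. A rough Python equivalent of the wait; the real helper in autotest_common.sh retries an RPC call rather than a bare connect, so this is only an approximation:

    import socket, time

    def waitforlisten(sock_path: str, timeout: float = 10.0) -> None:
        # Poll until a UNIX-domain listener accepts connections on sock_path.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return  # someone is listening; RPCs can be sent now
            except OSError:
                time.sleep(0.1)  # not up yet, retry shortly
        raise TimeoutError("no listener on " + sock_path)

    waitforlisten("/var/tmp/bdevperf.sock")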
00:21:41.791 [2024-07-24 20:15:45.523814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080365 ] 00:21:41.791 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.049 [2024-07-24 20:15:45.605948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.049 [2024-07-24 20:15:45.746293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.306 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.306 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:42.306 20:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.NORksAbXIw 00:21:42.564 [2024-07-24 20:15:46.154162] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.564 [2024-07-24 20:15:46.154314] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:42.564 [2024-07-24 20:15:46.160712] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:42.564 [2024-07-24 20:15:46.160755] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:42.564 [2024-07-24 20:15:46.160821] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:42.564 [2024-07-24 20:15:46.161188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22116d0 (107): Transport endpoint is not connected 00:21:42.564 [2024-07-24 20:15:46.162169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22116d0 (9): Bad file descriptor 00:21:42.564 [2024-07-24 20:15:46.163168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:42.564 [2024-07-24 20:15:46.163199] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:42.564 [2024-07-24 20:15:46.163224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
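The failure mode in this run differs from the previous one: the target holds a PSK registered for host1, but the initiator connects as host2, so the server-side callback cannot resolve the TLS PSK identity and drops the connection before the NVMe layer ever sees it. Judging by the error strings above, the identity concatenates a fixed NVMe prefix with both NQNs; a sketch with the layout inferred purely from that message, not from a spec:

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # "NVMe" + a version digit + "R" + a hash id, then host and subsystem
        # NQNs -- field meanings inferred from the log line above.
        return "NVMe0R01 {} {}".format(hostnqn, subnqn)

    print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1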
00:21:42.564 request: 00:21:42.564 { 00:21:42.564 "name": "TLSTEST", 00:21:42.564 "trtype": "tcp", 00:21:42.564 "traddr": "10.0.0.2", 00:21:42.564 "adrfam": "ipv4", 00:21:42.564 "trsvcid": "4420", 00:21:42.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.564 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:42.564 "prchk_reftag": false, 00:21:42.564 "prchk_guard": false, 00:21:42.564 "hdgst": false, 00:21:42.564 "ddgst": false, 00:21:42.564 "psk": "/tmp/tmp.NORksAbXIw", 00:21:42.564 "method": "bdev_nvme_attach_controller", 00:21:42.564 "req_id": 1 00:21:42.564 } 00:21:42.564 Got JSON-RPC error response 00:21:42.564 response: 00:21:42.564 { 00:21:42.564 "code": -5, 00:21:42.564 "message": "Input/output error" 00:21:42.564 } 00:21:42.564 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2080365 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2080365 ']' 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2080365 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2080365 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2080365' 00:21:42.565 killing process with pid 2080365 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2080365 00:21:42.565 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.565 00:21:42.565 Latency(us) 00:21:42.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.565 =================================================================================================================== 00:21:42.565 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:42.565 [2024-07-24 20:15:46.232968] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:42.565 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2080365 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NORksAbXIw 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NORksAbXIw 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NORksAbXIw 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NORksAbXIw' 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2080496 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2080496 /var/tmp/bdevperf.sock 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2080496 ']' 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.844 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.844 [2024-07-24 20:15:46.613754] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:21:42.844 [2024-07-24 20:15:46.613866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080496 ] 00:21:43.108 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.108 [2024-07-24 20:15:46.698045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.108 [2024-07-24 20:15:46.837512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.366 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.366 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:43.366 20:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NORksAbXIw 00:21:43.932 [2024-07-24 20:15:47.529197] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.932 [2024-07-24 20:15:47.529387] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:43.932 [2024-07-24 20:15:47.535958] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:43.932 [2024-07-24 20:15:47.536012] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:43.932 [2024-07-24 20:15:47.536068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:43.932 [2024-07-24 20:15:47.536451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23026d0 (107): Transport endpoint is not connected 00:21:43.932 [2024-07-24 20:15:47.537426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23026d0 (9): Bad file descriptor 00:21:43.932 [2024-07-24 20:15:47.538424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:43.932 [2024-07-24 20:15:47.538461] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:43.932 [2024-07-24 20:15:47.538489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:43.932 request: 00:21:43.932 { 00:21:43.932 "name": "TLSTEST", 00:21:43.932 "trtype": "tcp", 00:21:43.932 "traddr": "10.0.0.2", 00:21:43.932 "adrfam": "ipv4", 00:21:43.932 "trsvcid": "4420", 00:21:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.932 "prchk_reftag": false, 00:21:43.932 "prchk_guard": false, 00:21:43.932 "hdgst": false, 00:21:43.932 "ddgst": false, 00:21:43.932 "psk": "/tmp/tmp.NORksAbXIw", 00:21:43.932 "method": "bdev_nvme_attach_controller", 00:21:43.932 "req_id": 1 00:21:43.932 } 00:21:43.932 Got JSON-RPC error response 00:21:43.932 response: 00:21:43.932 { 00:21:43.932 "code": -5, 00:21:43.932 "message": "Input/output error" 00:21:43.932 } 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2080496 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2080496 ']' 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2080496 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2080496 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2080496' 00:21:43.932 killing process with pid 2080496 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2080496 00:21:43.932 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.932 00:21:43.932 Latency(us) 00:21:43.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.932 =================================================================================================================== 00:21:43.932 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:43.932 [2024-07-24 20:15:47.612696] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:43.932 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2080496 00:21:44.190 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:44.190 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:44.190 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:44.190 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:44.190 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2080755 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2080755 /var/tmp/bdevperf.sock 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2080755 ']' 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.191 20:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.449 [2024-07-24 20:15:47.990358] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:21:44.449 [2024-07-24 20:15:47.990468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080755 ] 00:21:44.449 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.449 [2024-07-24 20:15:48.072697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.449 [2024-07-24 20:15:48.210540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.707 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.707 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.707 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:44.966 [2024-07-24 20:15:48.636728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:44.966 [2024-07-24 20:15:48.638146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246ce10 (9): Bad file descriptor 00:21:44.966 [2024-07-24 20:15:48.639141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:44.966 [2024-07-24 20:15:48.639171] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:44.966 [2024-07-24 20:15:48.639197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:44.966 request: 00:21:44.966 { 00:21:44.966 "name": "TLSTEST", 00:21:44.966 "trtype": "tcp", 00:21:44.966 "traddr": "10.0.0.2", 00:21:44.966 "adrfam": "ipv4", 00:21:44.966 "trsvcid": "4420", 00:21:44.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.966 "prchk_reftag": false, 00:21:44.966 "prchk_guard": false, 00:21:44.966 "hdgst": false, 00:21:44.966 "ddgst": false, 00:21:44.966 "method": "bdev_nvme_attach_controller", 00:21:44.966 "req_id": 1 00:21:44.966 } 00:21:44.966 Got JSON-RPC error response 00:21:44.966 response: 00:21:44.966 { 00:21:44.966 "code": -5, 00:21:44.966 "message": "Input/output error" 00:21:44.966 } 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2080755 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2080755 ']' 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2080755 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2080755 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2080755' 00:21:44.966 killing process with pid 2080755 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2080755 00:21:44.966 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.966 00:21:44.966 Latency(us) 00:21:44.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.966 =================================================================================================================== 00:21:44.966 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.966 20:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2080755 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2076859 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2076859 ']' 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2076859 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2076859 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2076859' 00:21:45.533 killing process with pid 2076859 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2076859 00:21:45.533 [2024-07-24 20:15:49.053618] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:45.533 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2076859 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Pibd66V8Il 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Pibd66V8Il 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2080916 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2080916 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2080916 ']' 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.791 20:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.791 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.791 [2024-07-24 20:15:49.523118] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:21:45.791 [2024-07-24 20:15:49.523226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.791 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.050 [2024-07-24 20:15:49.613547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.050 [2024-07-24 20:15:49.751185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.050 [2024-07-24 20:15:49.751258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.050 [2024-07-24 20:15:49.751279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.050 [2024-07-24 20:15:49.751296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.050 [2024-07-24 20:15:49.751310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
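The inline python call inside format_key above is what produced the key_long string NVMeTLSkey-1:02:...: that was then written to /tmp/tmp.Pibd66V8Il and locked down to 0600. A standalone sketch of that transformation, assuming the usual PSK interchange layout (configured PSK bytes followed by their little-endian CRC32, then base64, with the "02" field reflecting the hash argument passed to format_interchange_psk):

    import base64, struct, zlib

    key = b"00112233445566778899aabbccddeeff0011223344556677"
    blob = key + struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)  # key + CRC32 (LE)
    print("NVMeTLSkey-1:02:" + base64.b64encode(blob).decode() + ":")
    # If the layout assumption holds, this prints the key_long value seen above.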
00:21:46.050 [2024-07-24 20:15:49.751348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.308 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.308 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:46.309 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.309 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:46.309 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.309 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.309 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Pibd66V8Il 00:21:46.309 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Pibd66V8Il 00:21:46.309 20:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:46.567 [2024-07-24 20:15:50.186640] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.567 20:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:47.133 20:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:47.699 [2024-07-24 20:15:51.213523] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:47.699 [2024-07-24 20:15:51.213862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.699 20:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:47.957 malloc0 00:21:48.214 20:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:48.779 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pibd66V8Il 00:21:49.344 [2024-07-24 20:15:52.836960] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pibd66V8Il 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Pibd66V8Il' 00:21:49.344 20:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2081328 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2081328 /var/tmp/bdevperf.sock 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2081328 ']' 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.344 20:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.344 [2024-07-24 20:15:52.912602] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:21:49.344 [2024-07-24 20:15:52.912693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081328 ] 00:21:49.344 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.344 [2024-07-24 20:15:53.012251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.602 [2024-07-24 20:15:53.192509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.858 20:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.858 20:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:49.858 20:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pibd66V8Il 00:21:50.116 [2024-07-24 20:15:53.684942] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.116 [2024-07-24 20:15:53.685094] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:50.116 TLSTESTn1 00:21:50.116 20:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:50.374 Running I/O for 10 seconds... 
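bdevperf.py drives the run over the same RPC socket: it invokes bdevperf's perform_tests method and then waits up to the -t 20 timeout for completion, which is what produces the result table below. A sketch of that request, assuming the method name exposed by bdevperf's own RPC handler and the same framing as the attach sketch earlier:

    import json, socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/bdevperf.sock")
        req = {"jsonrpc": "2.0", "id": 1, "method": "perform_tests"}
        s.sendall(json.dumps(req).encode())
        print(s.recv(65536).decode())  # stats are reported once the 10 s run ends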
00:22:00.343 00:22:00.343 Latency(us) 00:22:00.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.343 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:00.343 Verification LBA range: start 0x0 length 0x2000 00:22:00.343 TLSTESTn1 : 10.06 2544.63 9.94 0.00 0.00 50142.49 12718.84 56700.78 00:22:00.343 =================================================================================================================== 00:22:00.343 Total : 2544.63 9.94 0.00 0.00 50142.49 12718.84 56700.78 00:22:00.343 0 00:22:00.343 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.343 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2081328 00:22:00.343 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2081328 ']' 00:22:00.343 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2081328 00:22:00.343 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:00.343 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.343 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081328 00:22:00.601 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:00.601 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:00.601 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081328' 00:22:00.601 killing process with pid 2081328 00:22:00.601 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2081328 00:22:00.601 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.601 00:22:00.601 Latency(us) 00:22:00.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.601 =================================================================================================================== 00:22:00.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.601 [2024-07-24 20:16:04.159256] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:00.601 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2081328 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Pibd66V8Il 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pibd66V8Il 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pibd66V8Il 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:00.859 
20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pibd66V8Il 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Pibd66V8Il' 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2082649 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2082649 /var/tmp/bdevperf.sock 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2082649 ']' 00:22:00.859 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.860 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.860 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.860 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.860 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.860 [2024-07-24 20:16:04.549013] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
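This attach is expected to fail even though the key contents are still valid: the file was just chmod'ed to 0666, and the "Incorrect permissions for PSK file" error below shows SPDK refusing to load a PSK that group or others can access. A sketch of such a gate, assuming it only inspects the group/other mode bits, which is consistent with 0600 passing earlier and 0666 failing here:

    import os, stat

    def psk_file_perms_ok(path: str) -> bool:
        # Reject PSK files accessible to group or others: 0600 passes, 0666 fails.
        mode = os.stat(path).st_mode
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    print(psk_file_perms_ok("/tmp/tmp.Pibd66V8Il"))  # False after the chmod 0666 above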
00:22:00.860 [2024-07-24 20:16:04.549117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082649 ] 00:22:00.860 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.860 [2024-07-24 20:16:04.630644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.118 [2024-07-24 20:16:04.770017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.118 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.118 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:01.118 20:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pibd66V8Il 00:22:01.685 [2024-07-24 20:16:05.180810] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.685 [2024-07-24 20:16:05.180897] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:01.685 [2024-07-24 20:16:05.180918] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Pibd66V8Il 00:22:01.685 request: 00:22:01.685 { 00:22:01.685 "name": "TLSTEST", 00:22:01.685 "trtype": "tcp", 00:22:01.685 "traddr": "10.0.0.2", 00:22:01.685 "adrfam": "ipv4", 00:22:01.685 "trsvcid": "4420", 00:22:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.685 "prchk_reftag": false, 00:22:01.685 "prchk_guard": false, 00:22:01.685 "hdgst": false, 00:22:01.685 "ddgst": false, 00:22:01.685 "psk": "/tmp/tmp.Pibd66V8Il", 00:22:01.685 "method": "bdev_nvme_attach_controller", 00:22:01.685 "req_id": 1 00:22:01.685 } 00:22:01.685 Got JSON-RPC error response 00:22:01.685 response: 00:22:01.685 { 00:22:01.685 "code": -1, 00:22:01.685 "message": "Operation not permitted" 00:22:01.685 } 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2082649 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2082649 ']' 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2082649 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2082649 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2082649' 00:22:01.685 killing process with pid 2082649 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2082649 00:22:01.685 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.685 
00:22:01.685 Latency(us) 00:22:01.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.685 =================================================================================================================== 00:22:01.685 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:01.685 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2082649 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2080916 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2080916 ']' 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2080916 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2080916 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2080916' 00:22:01.944 killing process with pid 2080916 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2080916 00:22:01.944 [2024-07-24 20:16:05.586815] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:01.944 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2080916 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2082795 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2082795 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2082795 ']' 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.202 20:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.202 20:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.461 [2024-07-24 20:16:05.997651] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:02.461 [2024-07-24 20:16:05.997766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.461 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.461 [2024-07-24 20:16:06.089533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.461 [2024-07-24 20:16:06.225708] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.461 [2024-07-24 20:16:06.225772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.461 [2024-07-24 20:16:06.225793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.461 [2024-07-24 20:16:06.225810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.461 [2024-07-24 20:16:06.225826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
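With a fresh target process up, setup_nvmf_tgt is about to be retried and should now fail at the nvmf_subsystem_add_host step, since the key file is still world-readable. As the earlier successful pass traced, the whole function reduces to six rpc.py calls; condensed below as a sketch (the rpc.py path is shortened here, where the log uses the full workspace path):

    import subprocess

    RPC = "spdk/scripts/rpc.py"  # shortened; the log shows the full workspace path
    NQN, KEY = "nqn.2016-06.io.spdk:cnode1", "/tmp/tmp.Pibd66V8Il"

    for args in (
        ["nvmf_create_transport", "-t", "tcp", "-o"],
        ["nvmf_create_subsystem", NQN, "-s", "SPDK00000000000001", "-m", "10"],
        ["nvmf_subsystem_add_listener", NQN, "-t", "tcp",
         "-a", "10.0.0.2", "-s", "4420", "-k"],   # -k enables TLS on the listener
        ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
        ["nvmf_subsystem_add_ns", NQN, "malloc0", "-n", "1"],
        ["nvmf_subsystem_add_host", NQN, "nqn.2016-06.io.spdk:host1", "--psk", KEY],
    ):
        subprocess.run([RPC, *args], check=True)  # raises when add_host is rejected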
00:22:02.461 [2024-07-24 20:16:06.225870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Pibd66V8Il 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Pibd66V8Il 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Pibd66V8Il 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Pibd66V8Il 00:22:02.735 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:03.005 [2024-07-24 20:16:06.702696] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.005 20:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:03.572 20:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:03.830 [2024-07-24 20:16:07.569068] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.830 [2024-07-24 20:16:07.569380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.830 20:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:04.395 malloc0 00:22:04.654 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:04.912 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pibd66V8Il 00:22:05.170 [2024-07-24 20:16:08.867342] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:05.170 [2024-07-24 20:16:08.867395] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:05.170 [2024-07-24 20:16:08.867450] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:05.170 request: 00:22:05.170 { 00:22:05.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.170 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.170 "psk": "/tmp/tmp.Pibd66V8Il", 00:22:05.170 "method": "nvmf_subsystem_add_host", 00:22:05.170 "req_id": 1 00:22:05.170 } 00:22:05.170 Got JSON-RPC error response 00:22:05.170 response: 00:22:05.170 { 00:22:05.170 "code": -32603, 00:22:05.170 "message": "Internal error" 00:22:05.170 } 00:22:05.170 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:05.170 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.170 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2082795 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2082795 ']' 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2082795 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2082795 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2082795' 00:22:05.171 killing process with pid 2082795 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2082795 00:22:05.171 20:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2082795 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Pibd66V8Il 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2083221 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 2083221 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2083221 ']' 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.736 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.736 [2024-07-24 20:16:09.328255] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:05.736 [2024-07-24 20:16:09.328353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.736 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.736 [2024-07-24 20:16:09.417975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.994 [2024-07-24 20:16:09.560506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.994 [2024-07-24 20:16:09.560577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.994 [2024-07-24 20:16:09.560597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.994 [2024-07-24 20:16:09.560614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.994 [2024-07-24 20:16:09.560628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
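The 'Internal error' JSON-RPC response returned by nvmf_subsystem_add_host earlier ('Incorrect permissions for PSK file' in tcp.c) is the deliberate negative case: the target refuses a PSK file whose mode is too open, so the host is never added. The fix the script applied at target/tls.sh@181 is simply to tighten the mode before retrying; as a sketch, using the key path from this run:

  chmod 0600 /tmp/tmp.Pibd66V8Il
  # same RPC as before, now accepted because the key file is owner-only
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pibd66V8Il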
00:22:05.994 [2024-07-24 20:16:09.560666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Pibd66V8Il 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Pibd66V8Il 00:22:05.994 20:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:06.561 [2024-07-24 20:16:10.058855] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.561 20:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:06.819 20:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:07.077 [2024-07-24 20:16:10.736688] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.077 [2024-07-24 20:16:10.736995] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.077 20:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:07.335 malloc0 00:22:07.335 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:07.911 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pibd66V8Il 00:22:08.168 [2024-07-24 20:16:11.768991] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2083510 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2083510 /var/tmp/bdevperf.sock 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 2083510 ']' 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.168 20:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.168 [2024-07-24 20:16:11.845832] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:08.168 [2024-07-24 20:16:11.845923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083510 ] 00:22:08.168 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.168 [2024-07-24 20:16:11.926839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.426 [2024-07-24 20:16:12.067146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.426 20:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.426 20:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:08.426 20:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pibd66V8Il 00:22:08.992 [2024-07-24 20:16:12.730524] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.992 [2024-07-24 20:16:12.730681] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:09.250 TLSTESTn1 00:22:09.250 20:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:09.508 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:09.508 "subsystems": [ 00:22:09.508 { 00:22:09.508 "subsystem": "keyring", 00:22:09.508 "config": [] 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "subsystem": "iobuf", 00:22:09.508 "config": [ 00:22:09.508 { 00:22:09.508 "method": "iobuf_set_options", 00:22:09.508 "params": { 00:22:09.508 "small_pool_count": 8192, 00:22:09.508 "large_pool_count": 1024, 00:22:09.508 "small_bufsize": 8192, 00:22:09.508 "large_bufsize": 135168 00:22:09.508 } 00:22:09.508 } 00:22:09.508 ] 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "subsystem": "sock", 00:22:09.508 "config": [ 00:22:09.508 { 00:22:09.508 "method": "sock_set_default_impl", 00:22:09.508 "params": { 00:22:09.508 "impl_name": "posix" 00:22:09.508 } 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "method": "sock_impl_set_options", 00:22:09.508 "params": { 00:22:09.508 "impl_name": "ssl", 00:22:09.508 "recv_buf_size": 4096, 00:22:09.508 "send_buf_size": 4096, 
00:22:09.508 "enable_recv_pipe": true, 00:22:09.508 "enable_quickack": false, 00:22:09.508 "enable_placement_id": 0, 00:22:09.508 "enable_zerocopy_send_server": true, 00:22:09.508 "enable_zerocopy_send_client": false, 00:22:09.508 "zerocopy_threshold": 0, 00:22:09.508 "tls_version": 0, 00:22:09.508 "enable_ktls": false 00:22:09.508 } 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "method": "sock_impl_set_options", 00:22:09.508 "params": { 00:22:09.508 "impl_name": "posix", 00:22:09.508 "recv_buf_size": 2097152, 00:22:09.508 "send_buf_size": 2097152, 00:22:09.508 "enable_recv_pipe": true, 00:22:09.508 "enable_quickack": false, 00:22:09.508 "enable_placement_id": 0, 00:22:09.508 "enable_zerocopy_send_server": true, 00:22:09.508 "enable_zerocopy_send_client": false, 00:22:09.508 "zerocopy_threshold": 0, 00:22:09.508 "tls_version": 0, 00:22:09.508 "enable_ktls": false 00:22:09.508 } 00:22:09.508 } 00:22:09.508 ] 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "subsystem": "vmd", 00:22:09.508 "config": [] 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "subsystem": "accel", 00:22:09.508 "config": [ 00:22:09.508 { 00:22:09.508 "method": "accel_set_options", 00:22:09.508 "params": { 00:22:09.508 "small_cache_size": 128, 00:22:09.508 "large_cache_size": 16, 00:22:09.508 "task_count": 2048, 00:22:09.508 "sequence_count": 2048, 00:22:09.508 "buf_count": 2048 00:22:09.508 } 00:22:09.508 } 00:22:09.508 ] 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "subsystem": "bdev", 00:22:09.508 "config": [ 00:22:09.508 { 00:22:09.508 "method": "bdev_set_options", 00:22:09.508 "params": { 00:22:09.508 "bdev_io_pool_size": 65535, 00:22:09.508 "bdev_io_cache_size": 256, 00:22:09.508 "bdev_auto_examine": true, 00:22:09.508 "iobuf_small_cache_size": 128, 00:22:09.508 "iobuf_large_cache_size": 16 00:22:09.508 } 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "method": "bdev_raid_set_options", 00:22:09.508 "params": { 00:22:09.508 "process_window_size_kb": 1024, 00:22:09.508 "process_max_bandwidth_mb_sec": 0 00:22:09.508 } 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "method": "bdev_iscsi_set_options", 00:22:09.508 "params": { 00:22:09.508 "timeout_sec": 30 00:22:09.508 } 00:22:09.508 }, 00:22:09.508 { 00:22:09.508 "method": "bdev_nvme_set_options", 00:22:09.508 "params": { 00:22:09.508 "action_on_timeout": "none", 00:22:09.508 "timeout_us": 0, 00:22:09.508 "timeout_admin_us": 0, 00:22:09.508 "keep_alive_timeout_ms": 10000, 00:22:09.508 "arbitration_burst": 0, 00:22:09.508 "low_priority_weight": 0, 00:22:09.508 "medium_priority_weight": 0, 00:22:09.508 "high_priority_weight": 0, 00:22:09.508 "nvme_adminq_poll_period_us": 10000, 00:22:09.508 "nvme_ioq_poll_period_us": 0, 00:22:09.508 "io_queue_requests": 0, 00:22:09.508 "delay_cmd_submit": true, 00:22:09.508 "transport_retry_count": 4, 00:22:09.508 "bdev_retry_count": 3, 00:22:09.508 "transport_ack_timeout": 0, 00:22:09.508 "ctrlr_loss_timeout_sec": 0, 00:22:09.508 "reconnect_delay_sec": 0, 00:22:09.508 "fast_io_fail_timeout_sec": 0, 00:22:09.508 "disable_auto_failback": false, 00:22:09.508 "generate_uuids": false, 00:22:09.508 "transport_tos": 0, 00:22:09.508 "nvme_error_stat": false, 00:22:09.508 "rdma_srq_size": 0, 00:22:09.508 "io_path_stat": false, 00:22:09.508 "allow_accel_sequence": false, 00:22:09.508 "rdma_max_cq_size": 0, 00:22:09.509 "rdma_cm_event_timeout_ms": 0, 00:22:09.509 "dhchap_digests": [ 00:22:09.509 "sha256", 00:22:09.509 "sha384", 00:22:09.509 "sha512" 00:22:09.509 ], 00:22:09.509 "dhchap_dhgroups": [ 00:22:09.509 "null", 00:22:09.509 "ffdhe2048", 00:22:09.509 
"ffdhe3072", 00:22:09.509 "ffdhe4096", 00:22:09.509 "ffdhe6144", 00:22:09.509 "ffdhe8192" 00:22:09.509 ] 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "bdev_nvme_set_hotplug", 00:22:09.509 "params": { 00:22:09.509 "period_us": 100000, 00:22:09.509 "enable": false 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "bdev_malloc_create", 00:22:09.509 "params": { 00:22:09.509 "name": "malloc0", 00:22:09.509 "num_blocks": 8192, 00:22:09.509 "block_size": 4096, 00:22:09.509 "physical_block_size": 4096, 00:22:09.509 "uuid": "396ff5f3-966e-4f74-8953-be3c625394b6", 00:22:09.509 "optimal_io_boundary": 0, 00:22:09.509 "md_size": 0, 00:22:09.509 "dif_type": 0, 00:22:09.509 "dif_is_head_of_md": false, 00:22:09.509 "dif_pi_format": 0 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "bdev_wait_for_examine" 00:22:09.509 } 00:22:09.509 ] 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "subsystem": "nbd", 00:22:09.509 "config": [] 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "subsystem": "scheduler", 00:22:09.509 "config": [ 00:22:09.509 { 00:22:09.509 "method": "framework_set_scheduler", 00:22:09.509 "params": { 00:22:09.509 "name": "static" 00:22:09.509 } 00:22:09.509 } 00:22:09.509 ] 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "subsystem": "nvmf", 00:22:09.509 "config": [ 00:22:09.509 { 00:22:09.509 "method": "nvmf_set_config", 00:22:09.509 "params": { 00:22:09.509 "discovery_filter": "match_any", 00:22:09.509 "admin_cmd_passthru": { 00:22:09.509 "identify_ctrlr": false 00:22:09.509 } 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "nvmf_set_max_subsystems", 00:22:09.509 "params": { 00:22:09.509 "max_subsystems": 1024 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "nvmf_set_crdt", 00:22:09.509 "params": { 00:22:09.509 "crdt1": 0, 00:22:09.509 "crdt2": 0, 00:22:09.509 "crdt3": 0 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "nvmf_create_transport", 00:22:09.509 "params": { 00:22:09.509 "trtype": "TCP", 00:22:09.509 "max_queue_depth": 128, 00:22:09.509 "max_io_qpairs_per_ctrlr": 127, 00:22:09.509 "in_capsule_data_size": 4096, 00:22:09.509 "max_io_size": 131072, 00:22:09.509 "io_unit_size": 131072, 00:22:09.509 "max_aq_depth": 128, 00:22:09.509 "num_shared_buffers": 511, 00:22:09.509 "buf_cache_size": 4294967295, 00:22:09.509 "dif_insert_or_strip": false, 00:22:09.509 "zcopy": false, 00:22:09.509 "c2h_success": false, 00:22:09.509 "sock_priority": 0, 00:22:09.509 "abort_timeout_sec": 1, 00:22:09.509 "ack_timeout": 0, 00:22:09.509 "data_wr_pool_size": 0 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "nvmf_create_subsystem", 00:22:09.509 "params": { 00:22:09.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.509 "allow_any_host": false, 00:22:09.509 "serial_number": "SPDK00000000000001", 00:22:09.509 "model_number": "SPDK bdev Controller", 00:22:09.509 "max_namespaces": 10, 00:22:09.509 "min_cntlid": 1, 00:22:09.509 "max_cntlid": 65519, 00:22:09.509 "ana_reporting": false 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "nvmf_subsystem_add_host", 00:22:09.509 "params": { 00:22:09.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.509 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.509 "psk": "/tmp/tmp.Pibd66V8Il" 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "nvmf_subsystem_add_ns", 00:22:09.509 "params": { 00:22:09.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.509 "namespace": { 00:22:09.509 "nsid": 1, 00:22:09.509 
"bdev_name": "malloc0", 00:22:09.509 "nguid": "396FF5F3966E4F748953BE3C625394B6", 00:22:09.509 "uuid": "396ff5f3-966e-4f74-8953-be3c625394b6", 00:22:09.509 "no_auto_visible": false 00:22:09.509 } 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "nvmf_subsystem_add_listener", 00:22:09.509 "params": { 00:22:09.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.509 "listen_address": { 00:22:09.509 "trtype": "TCP", 00:22:09.509 "adrfam": "IPv4", 00:22:09.509 "traddr": "10.0.0.2", 00:22:09.509 "trsvcid": "4420" 00:22:09.509 }, 00:22:09.509 "secure_channel": true 00:22:09.509 } 00:22:09.509 } 00:22:09.509 ] 00:22:09.509 } 00:22:09.509 ] 00:22:09.509 }' 00:22:09.509 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:09.767 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:09.767 "subsystems": [ 00:22:09.767 { 00:22:09.767 "subsystem": "keyring", 00:22:09.767 "config": [] 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "subsystem": "iobuf", 00:22:09.767 "config": [ 00:22:09.767 { 00:22:09.767 "method": "iobuf_set_options", 00:22:09.767 "params": { 00:22:09.767 "small_pool_count": 8192, 00:22:09.767 "large_pool_count": 1024, 00:22:09.767 "small_bufsize": 8192, 00:22:09.767 "large_bufsize": 135168 00:22:09.767 } 00:22:09.767 } 00:22:09.767 ] 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "subsystem": "sock", 00:22:09.767 "config": [ 00:22:09.767 { 00:22:09.767 "method": "sock_set_default_impl", 00:22:09.767 "params": { 00:22:09.767 "impl_name": "posix" 00:22:09.767 } 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "method": "sock_impl_set_options", 00:22:09.767 "params": { 00:22:09.767 "impl_name": "ssl", 00:22:09.767 "recv_buf_size": 4096, 00:22:09.767 "send_buf_size": 4096, 00:22:09.767 "enable_recv_pipe": true, 00:22:09.767 "enable_quickack": false, 00:22:09.767 "enable_placement_id": 0, 00:22:09.767 "enable_zerocopy_send_server": true, 00:22:09.767 "enable_zerocopy_send_client": false, 00:22:09.767 "zerocopy_threshold": 0, 00:22:09.767 "tls_version": 0, 00:22:09.767 "enable_ktls": false 00:22:09.767 } 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "method": "sock_impl_set_options", 00:22:09.767 "params": { 00:22:09.767 "impl_name": "posix", 00:22:09.767 "recv_buf_size": 2097152, 00:22:09.767 "send_buf_size": 2097152, 00:22:09.767 "enable_recv_pipe": true, 00:22:09.767 "enable_quickack": false, 00:22:09.767 "enable_placement_id": 0, 00:22:09.767 "enable_zerocopy_send_server": true, 00:22:09.767 "enable_zerocopy_send_client": false, 00:22:09.767 "zerocopy_threshold": 0, 00:22:09.767 "tls_version": 0, 00:22:09.767 "enable_ktls": false 00:22:09.767 } 00:22:09.767 } 00:22:09.767 ] 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "subsystem": "vmd", 00:22:09.767 "config": [] 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "subsystem": "accel", 00:22:09.767 "config": [ 00:22:09.767 { 00:22:09.767 "method": "accel_set_options", 00:22:09.767 "params": { 00:22:09.767 "small_cache_size": 128, 00:22:09.767 "large_cache_size": 16, 00:22:09.767 "task_count": 2048, 00:22:09.767 "sequence_count": 2048, 00:22:09.767 "buf_count": 2048 00:22:09.767 } 00:22:09.767 } 00:22:09.767 ] 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "subsystem": "bdev", 00:22:09.767 "config": [ 00:22:09.767 { 00:22:09.767 "method": "bdev_set_options", 00:22:09.767 "params": { 00:22:09.767 "bdev_io_pool_size": 65535, 00:22:09.767 "bdev_io_cache_size": 256, 00:22:09.767 
"bdev_auto_examine": true, 00:22:09.767 "iobuf_small_cache_size": 128, 00:22:09.767 "iobuf_large_cache_size": 16 00:22:09.767 } 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "method": "bdev_raid_set_options", 00:22:09.767 "params": { 00:22:09.767 "process_window_size_kb": 1024, 00:22:09.767 "process_max_bandwidth_mb_sec": 0 00:22:09.767 } 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "method": "bdev_iscsi_set_options", 00:22:09.767 "params": { 00:22:09.767 "timeout_sec": 30 00:22:09.767 } 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "method": "bdev_nvme_set_options", 00:22:09.767 "params": { 00:22:09.767 "action_on_timeout": "none", 00:22:09.767 "timeout_us": 0, 00:22:09.767 "timeout_admin_us": 0, 00:22:09.767 "keep_alive_timeout_ms": 10000, 00:22:09.767 "arbitration_burst": 0, 00:22:09.767 "low_priority_weight": 0, 00:22:09.767 "medium_priority_weight": 0, 00:22:09.767 "high_priority_weight": 0, 00:22:09.767 "nvme_adminq_poll_period_us": 10000, 00:22:09.767 "nvme_ioq_poll_period_us": 0, 00:22:09.767 "io_queue_requests": 512, 00:22:09.767 "delay_cmd_submit": true, 00:22:09.767 "transport_retry_count": 4, 00:22:09.767 "bdev_retry_count": 3, 00:22:09.767 "transport_ack_timeout": 0, 00:22:09.767 "ctrlr_loss_timeout_sec": 0, 00:22:09.767 "reconnect_delay_sec": 0, 00:22:09.767 "fast_io_fail_timeout_sec": 0, 00:22:09.767 "disable_auto_failback": false, 00:22:09.767 "generate_uuids": false, 00:22:09.767 "transport_tos": 0, 00:22:09.767 "nvme_error_stat": false, 00:22:09.767 "rdma_srq_size": 0, 00:22:09.767 "io_path_stat": false, 00:22:09.767 "allow_accel_sequence": false, 00:22:09.767 "rdma_max_cq_size": 0, 00:22:09.767 "rdma_cm_event_timeout_ms": 0, 00:22:09.767 "dhchap_digests": [ 00:22:09.767 "sha256", 00:22:09.767 "sha384", 00:22:09.767 "sha512" 00:22:09.767 ], 00:22:09.767 "dhchap_dhgroups": [ 00:22:09.767 "null", 00:22:09.767 "ffdhe2048", 00:22:09.767 "ffdhe3072", 00:22:09.767 "ffdhe4096", 00:22:09.767 "ffdhe6144", 00:22:09.767 "ffdhe8192" 00:22:09.767 ] 00:22:09.767 } 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "method": "bdev_nvme_attach_controller", 00:22:09.767 "params": { 00:22:09.767 "name": "TLSTEST", 00:22:09.767 "trtype": "TCP", 00:22:09.767 "adrfam": "IPv4", 00:22:09.767 "traddr": "10.0.0.2", 00:22:09.767 "trsvcid": "4420", 00:22:09.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.767 "prchk_reftag": false, 00:22:09.767 "prchk_guard": false, 00:22:09.767 "ctrlr_loss_timeout_sec": 0, 00:22:09.767 "reconnect_delay_sec": 0, 00:22:09.767 "fast_io_fail_timeout_sec": 0, 00:22:09.767 "psk": "/tmp/tmp.Pibd66V8Il", 00:22:09.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.767 "hdgst": false, 00:22:09.767 "ddgst": false 00:22:09.767 } 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "method": "bdev_nvme_set_hotplug", 00:22:09.767 "params": { 00:22:09.767 "period_us": 100000, 00:22:09.767 "enable": false 00:22:09.767 } 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "method": "bdev_wait_for_examine" 00:22:09.767 } 00:22:09.767 ] 00:22:09.767 }, 00:22:09.767 { 00:22:09.767 "subsystem": "nbd", 00:22:09.768 "config": [] 00:22:09.768 } 00:22:09.768 ] 00:22:09.768 }' 00:22:09.768 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2083510 00:22:09.768 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2083510 ']' 00:22:09.768 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2083510 00:22:09.768 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:22:09.768 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:10.025 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2083510
00:22:10.025 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:22:10.025 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:22:10.025 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2083510'
killing process with pid 2083510
20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2083510
Received shutdown signal, test time was about 10.000000 seconds
00:22:10.025
00:22:10.025                                                                           Latency(us)
00:22:10.025 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:10.025 ===================================================================================================================
00:22:10.025 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:22:10.025 [2024-07-24 20:16:13.562946] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2083510
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2083221
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2083221 ']'
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2083221
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2083221
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:22:10.283 20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2083221'
killing process with pid 2083221
20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2083221
[2024-07-24 20:16:13.919124] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
20:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2083221
00:22:10.542 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:22:10.542 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{
00:22:10.542 "subsystems": [
00:22:10.542 {
00:22:10.542 "subsystem": "keyring",
00:22:10.542 "config": []
00:22:10.542 },
00:22:10.542 {
00:22:10.542 "subsystem": "iobuf",
00:22:10.542 "config": [
00:22:10.542 {
00:22:10.542 "method": "iobuf_set_options",
00:22:10.542 "params": { 00:22:10.542 "small_pool_count": 8192, 00:22:10.542 "large_pool_count": 1024, 00:22:10.542 "small_bufsize": 8192, 00:22:10.542 "large_bufsize": 135168 00:22:10.542 } 00:22:10.542 } 00:22:10.542 ] 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "subsystem": "sock", 00:22:10.542 "config": [ 00:22:10.542 { 00:22:10.542 "method": "sock_set_default_impl", 00:22:10.542 "params": { 00:22:10.542 "impl_name": "posix" 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "sock_impl_set_options", 00:22:10.542 "params": { 00:22:10.542 "impl_name": "ssl", 00:22:10.542 "recv_buf_size": 4096, 00:22:10.542 "send_buf_size": 4096, 00:22:10.542 "enable_recv_pipe": true, 00:22:10.542 "enable_quickack": false, 00:22:10.542 "enable_placement_id": 0, 00:22:10.542 "enable_zerocopy_send_server": true, 00:22:10.542 "enable_zerocopy_send_client": false, 00:22:10.542 "zerocopy_threshold": 0, 00:22:10.542 "tls_version": 0, 00:22:10.542 "enable_ktls": false 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "sock_impl_set_options", 00:22:10.542 "params": { 00:22:10.542 "impl_name": "posix", 00:22:10.542 "recv_buf_size": 2097152, 00:22:10.542 "send_buf_size": 2097152, 00:22:10.542 "enable_recv_pipe": true, 00:22:10.542 "enable_quickack": false, 00:22:10.542 "enable_placement_id": 0, 00:22:10.542 "enable_zerocopy_send_server": true, 00:22:10.542 "enable_zerocopy_send_client": false, 00:22:10.542 "zerocopy_threshold": 0, 00:22:10.542 "tls_version": 0, 00:22:10.542 "enable_ktls": false 00:22:10.542 } 00:22:10.542 } 00:22:10.542 ] 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "subsystem": "vmd", 00:22:10.542 "config": [] 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "subsystem": "accel", 00:22:10.542 "config": [ 00:22:10.542 { 00:22:10.542 "method": "accel_set_options", 00:22:10.542 "params": { 00:22:10.542 "small_cache_size": 128, 00:22:10.542 "large_cache_size": 16, 00:22:10.542 "task_count": 2048, 00:22:10.542 "sequence_count": 2048, 00:22:10.542 "buf_count": 2048 00:22:10.542 } 00:22:10.542 } 00:22:10.542 ] 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "subsystem": "bdev", 00:22:10.542 "config": [ 00:22:10.542 { 00:22:10.542 "method": "bdev_set_options", 00:22:10.542 "params": { 00:22:10.542 "bdev_io_pool_size": 65535, 00:22:10.542 "bdev_io_cache_size": 256, 00:22:10.542 "bdev_auto_examine": true, 00:22:10.542 "iobuf_small_cache_size": 128, 00:22:10.542 "iobuf_large_cache_size": 16 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "bdev_raid_set_options", 00:22:10.542 "params": { 00:22:10.542 "process_window_size_kb": 1024, 00:22:10.542 "process_max_bandwidth_mb_sec": 0 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "bdev_iscsi_set_options", 00:22:10.542 "params": { 00:22:10.542 "timeout_sec": 30 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "bdev_nvme_set_options", 00:22:10.542 "params": { 00:22:10.542 "action_on_timeout": "none", 00:22:10.542 "timeout_us": 0, 00:22:10.542 "timeout_admin_us": 0, 00:22:10.542 "keep_alive_timeout_ms": 10000, 00:22:10.542 "arbitration_burst": 0, 00:22:10.542 "low_priority_weight": 0, 00:22:10.542 "medium_priority_weight": 0, 00:22:10.542 "high_priority_weight": 0, 00:22:10.542 "nvme_adminq_poll_period_us": 10000, 00:22:10.542 "nvme_ioq_poll_period_us": 0, 00:22:10.542 "io_queue_requests": 0, 00:22:10.542 "delay_cmd_submit": true, 00:22:10.542 "transport_retry_count": 4, 00:22:10.542 "bdev_retry_count": 3, 00:22:10.542 "transport_ack_timeout": 0, 00:22:10.542 
"ctrlr_loss_timeout_sec": 0, 00:22:10.542 "reconnect_delay_sec": 0, 00:22:10.542 "fast_io_fail_timeout_sec": 0, 00:22:10.542 "disable_auto_failback": false, 00:22:10.542 "generate_uuids": false, 00:22:10.542 "transport_tos": 0, 00:22:10.542 "nvme_error_stat": false, 00:22:10.542 "rdma_srq_size": 0, 00:22:10.542 "io_path_stat": false, 00:22:10.542 "allow_accel_sequence": false, 00:22:10.542 "rdma_max_cq_size": 0, 00:22:10.542 "rdma_cm_event_timeout_ms": 0, 00:22:10.542 "dhchap_digests": [ 00:22:10.542 "sha256", 00:22:10.542 "sha384", 00:22:10.542 "sha512" 00:22:10.542 ], 00:22:10.542 "dhchap_dhgroups": [ 00:22:10.542 "null", 00:22:10.542 "ffdhe2048", 00:22:10.542 "ffdhe3072", 00:22:10.542 "ffdhe4096", 00:22:10.542 "ffdhe6144", 00:22:10.542 "ffdhe8192" 00:22:10.542 ] 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "bdev_nvme_set_hotplug", 00:22:10.542 "params": { 00:22:10.542 "period_us": 100000, 00:22:10.542 "enable": false 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "bdev_malloc_create", 00:22:10.542 "params": { 00:22:10.542 "name": "malloc0", 00:22:10.542 "num_blocks": 8192, 00:22:10.542 "block_size": 4096, 00:22:10.542 "physical_block_size": 4096, 00:22:10.542 "uuid": "396ff5f3-966e-4f74-8953-be3c625394b6", 00:22:10.542 "optimal_io_boundary": 0, 00:22:10.542 "md_size": 0, 00:22:10.542 "dif_type": 0, 00:22:10.542 "dif_is_head_of_md": false, 00:22:10.542 "dif_pi_format": 0 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "bdev_wait_for_examine" 00:22:10.542 } 00:22:10.542 ] 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "subsystem": "nbd", 00:22:10.542 "config": [] 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "subsystem": "scheduler", 00:22:10.542 "config": [ 00:22:10.542 { 00:22:10.542 "method": "framework_set_scheduler", 00:22:10.542 "params": { 00:22:10.542 "name": "static" 00:22:10.542 } 00:22:10.542 } 00:22:10.542 ] 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "subsystem": "nvmf", 00:22:10.542 "config": [ 00:22:10.542 { 00:22:10.542 "method": "nvmf_set_config", 00:22:10.542 "params": { 00:22:10.542 "discovery_filter": "match_any", 00:22:10.542 "admin_cmd_passthru": { 00:22:10.542 "identify_ctrlr": false 00:22:10.542 } 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "nvmf_set_max_subsystems", 00:22:10.542 "params": { 00:22:10.542 "max_subsystems": 1024 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "nvmf_set_crdt", 00:22:10.542 "params": { 00:22:10.542 "crdt1": 0, 00:22:10.542 "crdt2": 0, 00:22:10.542 "crdt3": 0 00:22:10.542 } 00:22:10.542 }, 00:22:10.542 { 00:22:10.542 "method": "nvmf_create_transport", 00:22:10.542 "params": { 00:22:10.542 "trtype": "TCP", 00:22:10.542 "max_queue_depth": 128, 00:22:10.542 "max_io_qpairs_per_ctrlr": 127, 00:22:10.542 "in_capsule_data_size": 4096, 00:22:10.542 "max_io_size": 131072, 00:22:10.542 "io_unit_size": 131072, 00:22:10.542 "max_aq_depth": 128, 00:22:10.543 "num_shared_buffers": 511, 00:22:10.543 "buf_cache_size": 4294967295, 00:22:10.543 "dif_insert_or_strip": false, 00:22:10.543 "zcopy": false, 00:22:10.543 "c2h_success": false, 00:22:10.543 "sock_priority": 0, 00:22:10.543 "abort_timeout_sec": 1, 00:22:10.543 "ack_timeout": 0, 00:22:10.543 "data_wr_pool_size": 0 00:22:10.543 } 00:22:10.543 }, 00:22:10.543 { 00:22:10.543 "method": "nvmf_create_subsystem", 00:22:10.543 "params": { 00:22:10.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.543 "allow_any_host": false, 00:22:10.543 "serial_number": "SPDK00000000000001", 00:22:10.543 
"model_number": "SPDK bdev Controller", 00:22:10.543 "max_namespaces": 10, 00:22:10.543 "min_cntlid": 1, 00:22:10.543 "max_cntlid": 65519, 00:22:10.543 "ana_reporting": false 00:22:10.543 } 00:22:10.543 }, 00:22:10.543 { 00:22:10.543 "method": "nvmf_subsystem_add_host", 00:22:10.543 "params": { 00:22:10.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.543 "host": "nqn.2016-06.io.spdk:host1", 00:22:10.543 "psk": "/tmp/tmp.Pibd66V8Il" 00:22:10.543 } 00:22:10.543 }, 00:22:10.543 { 00:22:10.543 "method": "nvmf_subsystem_add_ns", 00:22:10.543 "params": { 00:22:10.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.543 "namespace": { 00:22:10.543 "nsid": 1, 00:22:10.543 "bdev_name": "malloc0", 00:22:10.543 "nguid": "396FF5F3966E4F748953BE3C625394B6", 00:22:10.543 "uuid": "396ff5f3-966e-4f74-8953-be3c625394b6", 00:22:10.543 "no_auto_visible": false 00:22:10.543 } 00:22:10.543 } 00:22:10.543 }, 00:22:10.543 { 00:22:10.543 "method": "nvmf_subsystem_add_listener", 00:22:10.543 "params": { 00:22:10.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.543 "listen_address": { 00:22:10.543 "trtype": "TCP", 00:22:10.543 "adrfam": "IPv4", 00:22:10.543 "traddr": "10.0.0.2", 00:22:10.543 "trsvcid": "4420" 00:22:10.543 }, 00:22:10.543 "secure_channel": true 00:22:10.543 } 00:22:10.543 } 00:22:10.543 ] 00:22:10.543 } 00:22:10.543 ] 00:22:10.543 }' 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2083848 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2083848 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2083848 ']' 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.543 20:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.802 [2024-07-24 20:16:14.331513] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:10.802 [2024-07-24 20:16:14.331614] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.802 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.802 [2024-07-24 20:16:14.425037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.802 [2024-07-24 20:16:14.563113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:10.802 [2024-07-24 20:16:14.563186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.802 [2024-07-24 20:16:14.563206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.802 [2024-07-24 20:16:14.563222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.802 [2024-07-24 20:16:14.563237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.802 [2024-07-24 20:16:14.563348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.060 [2024-07-24 20:16:14.813218] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.060 [2024-07-24 20:16:14.837505] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:11.318 [2024-07-24 20:16:14.853582] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:11.318 [2024-07-24 20:16:14.853873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2084061 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2084061 /var/tmp/bdevperf.sock 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2084061 ']' 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.884 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:11.884 "subsystems": [ 00:22:11.884 { 00:22:11.884 "subsystem": "keyring", 00:22:11.884 "config": [] 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "subsystem": "iobuf", 00:22:11.884 "config": [ 00:22:11.884 { 00:22:11.884 "method": "iobuf_set_options", 00:22:11.884 "params": { 00:22:11.884 "small_pool_count": 8192, 00:22:11.884 "large_pool_count": 1024, 00:22:11.884 "small_bufsize": 8192, 00:22:11.884 "large_bufsize": 135168 00:22:11.884 } 00:22:11.884 } 00:22:11.884 ] 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "subsystem": "sock", 00:22:11.884 "config": [ 00:22:11.884 { 00:22:11.884 "method": "sock_set_default_impl", 00:22:11.884 "params": { 00:22:11.884 "impl_name": "posix" 00:22:11.884 } 00:22:11.884 }, 
00:22:11.884 { 00:22:11.884 "method": "sock_impl_set_options", 00:22:11.884 "params": { 00:22:11.884 "impl_name": "ssl", 00:22:11.884 "recv_buf_size": 4096, 00:22:11.884 "send_buf_size": 4096, 00:22:11.884 "enable_recv_pipe": true, 00:22:11.884 "enable_quickack": false, 00:22:11.884 "enable_placement_id": 0, 00:22:11.884 "enable_zerocopy_send_server": true, 00:22:11.884 "enable_zerocopy_send_client": false, 00:22:11.884 "zerocopy_threshold": 0, 00:22:11.884 "tls_version": 0, 00:22:11.884 "enable_ktls": false 00:22:11.884 } 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "method": "sock_impl_set_options", 00:22:11.884 "params": { 00:22:11.884 "impl_name": "posix", 00:22:11.884 "recv_buf_size": 2097152, 00:22:11.884 "send_buf_size": 2097152, 00:22:11.884 "enable_recv_pipe": true, 00:22:11.884 "enable_quickack": false, 00:22:11.884 "enable_placement_id": 0, 00:22:11.884 "enable_zerocopy_send_server": true, 00:22:11.884 "enable_zerocopy_send_client": false, 00:22:11.884 "zerocopy_threshold": 0, 00:22:11.884 "tls_version": 0, 00:22:11.884 "enable_ktls": false 00:22:11.884 } 00:22:11.884 } 00:22:11.884 ] 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "subsystem": "vmd", 00:22:11.884 "config": [] 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "subsystem": "accel", 00:22:11.884 "config": [ 00:22:11.884 { 00:22:11.884 "method": "accel_set_options", 00:22:11.884 "params": { 00:22:11.884 "small_cache_size": 128, 00:22:11.884 "large_cache_size": 16, 00:22:11.884 "task_count": 2048, 00:22:11.884 "sequence_count": 2048, 00:22:11.884 "buf_count": 2048 00:22:11.884 } 00:22:11.884 } 00:22:11.884 ] 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "subsystem": "bdev", 00:22:11.884 "config": [ 00:22:11.884 { 00:22:11.884 "method": "bdev_set_options", 00:22:11.884 "params": { 00:22:11.884 "bdev_io_pool_size": 65535, 00:22:11.884 "bdev_io_cache_size": 256, 00:22:11.884 "bdev_auto_examine": true, 00:22:11.884 "iobuf_small_cache_size": 128, 00:22:11.884 "iobuf_large_cache_size": 16 00:22:11.884 } 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "method": "bdev_raid_set_options", 00:22:11.884 "params": { 00:22:11.884 "process_window_size_kb": 1024, 00:22:11.884 "process_max_bandwidth_mb_sec": 0 00:22:11.884 } 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "method": "bdev_iscsi_set_options", 00:22:11.884 "params": { 00:22:11.884 "timeout_sec": 30 00:22:11.884 } 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "method": "bdev_nvme_set_options", 00:22:11.884 "params": { 00:22:11.884 "action_on_timeout": "none", 00:22:11.884 "timeout_us": 0, 00:22:11.884 "timeout_admin_us": 0, 00:22:11.884 "keep_alive_timeout_ms": 10000, 00:22:11.884 "arbitration_burst": 0, 00:22:11.884 "low_priority_weight": 0, 00:22:11.884 "medium_priority_weight": 0, 00:22:11.884 "high_priority_weight": 0, 00:22:11.884 "nvme_adminq_poll_period_us": 10000, 00:22:11.884 "nvme_ioq_poll_period_us": 0, 00:22:11.884 "io_queue_requests": 512, 00:22:11.884 "delay_cmd_submit": true, 00:22:11.884 "transport_retry_count": 4, 00:22:11.884 "bdev_retry_count": 3, 00:22:11.884 "transport_ack_timeout": 0, 00:22:11.884 "ctrlr_loss_timeout_sec": 0, 00:22:11.884 "reconnect_delay_sec": 0, 00:22:11.884 "fast_io_fail_timeout_sec": 0, 00:22:11.884 "disable_auto_failback": false, 00:22:11.884 "generate_uuids": false, 00:22:11.884 "transport_tos": 0, 00:22:11.884 "nvme_error_stat": false, 00:22:11.884 "rdma_srq_size": 0, 00:22:11.884 "io_path_stat": false, 00:22:11.884 "allow_accel_sequence": false, 00:22:11.884 "rdma_max_cq_size": 0, 00:22:11.884 "rdma_cm_event_timeout_ms": 0, 00:22:11.884 
"dhchap_digests": [ 00:22:11.884 "sha256", 00:22:11.884 "sha384", 00:22:11.884 "sha512" 00:22:11.884 ], 00:22:11.884 "dhchap_dhgroups": [ 00:22:11.884 "null", 00:22:11.884 "ffdhe2048", 00:22:11.884 "ffdhe3072", 00:22:11.884 "ffdhe4096", 00:22:11.884 "ffdhe6144", 00:22:11.884 "ffdhe8192" 00:22:11.884 ] 00:22:11.884 } 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "method": "bdev_nvme_attach_controller", 00:22:11.884 "params": { 00:22:11.884 "name": "TLSTEST", 00:22:11.884 "trtype": "TCP", 00:22:11.884 "adrfam": "IPv4", 00:22:11.884 "traddr": "10.0.0.2", 00:22:11.884 "trsvcid": "4420", 00:22:11.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.884 "prchk_reftag": false, 00:22:11.884 "prchk_guard": false, 00:22:11.884 "ctrlr_loss_timeout_sec": 0, 00:22:11.884 "reconnect_delay_sec": 0, 00:22:11.884 "fast_io_fail_timeout_sec": 0, 00:22:11.884 "psk": "/tmp/tmp.Pibd66V8Il", 00:22:11.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:11.884 "hdgst": false, 00:22:11.884 "ddgst": false 00:22:11.884 } 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "method": "bdev_nvme_set_hotplug", 00:22:11.884 "params": { 00:22:11.884 "period_us": 100000, 00:22:11.884 "enable": false 00:22:11.884 } 00:22:11.884 }, 00:22:11.884 { 00:22:11.884 "method": "bdev_wait_for_examine" 00:22:11.884 } 00:22:11.884 ] 00:22:11.884 }, 00:22:11.884 { 00:22:11.885 "subsystem": "nbd", 00:22:11.885 "config": [] 00:22:11.885 } 00:22:11.885 ] 00:22:11.885 }' 00:22:11.885 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.885 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.885 20:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.885 [2024-07-24 20:16:15.615591] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:11.885 [2024-07-24 20:16:15.615761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084061 ] 00:22:12.143 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.143 [2024-07-24 20:16:15.732004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.143 [2024-07-24 20:16:15.874353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.401 [2024-07-24 20:16:16.067350] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.401 [2024-07-24 20:16:16.067561] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:12.659 20:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.659 20:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:12.659 20:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:12.659 Running I/O for 10 seconds... 
00:22:22.661 00:22:22.661 Latency(us) 00:22:22.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.661 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:22.661 Verification LBA range: start 0x0 length 0x2000 00:22:22.661 TLSTESTn1 : 10.03 2638.34 10.31 0.00 0.00 48412.27 11068.30 48545.19 00:22:22.661 =================================================================================================================== 00:22:22.661 Total : 2638.34 10.31 0.00 0.00 48412.27 11068.30 48545.19 00:22:22.661 0 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2084061 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2084061 ']' 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2084061 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2084061 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2084061' 00:22:22.661 killing process with pid 2084061 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2084061 00:22:22.661 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.661 00:22:22.661 Latency(us) 00:22:22.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.661 =================================================================================================================== 00:22:22.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.661 [2024-07-24 20:16:26.425776] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:22.661 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2084061 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2083848 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2083848 ']' 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2083848 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2083848 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:23.259 20:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2083848' 00:22:23.259 killing process with pid 2083848 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2083848 00:22:23.259 [2024-07-24 20:16:26.787619] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:23.259 20:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2083848 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2085382 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2085382 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2085382 ']' 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.517 20:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.517 [2024-07-24 20:16:27.192941] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:23.517 [2024-07-24 20:16:27.193043] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.517 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.777 [2024-07-24 20:16:27.303617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.777 [2024-07-24 20:16:27.507243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.777 [2024-07-24 20:16:27.507355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.777 [2024-07-24 20:16:27.507392] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.777 [2024-07-24 20:16:27.507422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.777 [2024-07-24 20:16:27.507469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
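The nvmf target started above was launched with -e 0xFFFF, so every tracepoint group is enabled, and the app_setup_trace notices spell out the two ways to collect the data. A sketch of both, following those notices (the build/bin location of spdk_trace is an assumption about this workspace layout):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Live snapshot of the nvmf tracepoints from shm instance 0, as the notice suggests:
    $SPDK/build/bin/spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0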
00:22:23.777 [2024-07-24 20:16:27.507555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Pibd66V8Il 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Pibd66V8Il 00:22:24.713 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:24.971 [2024-07-24 20:16:28.702547] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.971 20:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.538 20:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:26.105 [2024-07-24 20:16:29.782287] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.105 [2024-07-24 20:16:29.782781] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.105 20:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.671 malloc0 00:22:26.671 20:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.930 20:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pibd66V8Il 00:22:27.495 [2024-07-24 20:16:30.994748] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2085804 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2085804 /var/tmp/bdevperf.sock 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
-z 2085804 ']' 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.495 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.495 [2024-07-24 20:16:31.075890] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:27.495 [2024-07-24 20:16:31.075985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085804 ] 00:22:27.495 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.495 [2024-07-24 20:16:31.156443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.752 [2024-07-24 20:16:31.297520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.011 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.011 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:28.011 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Pibd66V8Il 00:22:28.269 20:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:28.526 [2024-07-24 20:16:32.224086] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.784 nvme0n1 00:22:28.784 20:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:28.784 Running I/O for 1 seconds... 
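This pass swaps the deprecated in-config PSK path for the keyring flow traced at target/tls.sh@227 and @228 above: the key file is registered under a name first, and bdev_nvme_attach_controller then references it with --psk key0. A sketch of that sequence against the bdevperf RPC socket, with a listing call to confirm registration (keyring_get_keys is assumed available in this SPDK version):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock
    $SPDK/scripts/rpc.py -s $SOCK keyring_file_add_key key0 /tmp/tmp.Pibd66V8Il
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    $SPDK/scripts/rpc.py -s $SOCK keyring_get_keys   # assumed helper to list registered keys

In the result table below, the MiB/s column is simply IOPS times the 4096-byte IO size: 2586.47 x 4096 / 2^20 = 10.10 MiB/s.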
00:22:30.156
00:22:30.156 Latency(us)
00:22:30.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:30.156 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:30.156 Verification LBA range: start 0x0 length 0x2000
00:22:30.156 nvme0n1 : 1.03 2586.47 10.10 0.00 0.00 48797.03 8058.50 43302.31
00:22:30.156 ===================================================================================================================
00:22:30.156 Total : 2586.47 10.10 0.00 0.00 48797.03 8058.50 43302.31
00:22:30.156 0
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2085804
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2085804 ']'
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2085804
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2085804
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2085804'
00:22:30.156 killing process with pid 2085804
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2085804
00:22:30.156 Received shutdown signal, test time was about 1.000000 seconds
00:22:30.156
00:22:30.156 Latency(us)
00:22:30.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:30.156 ===================================================================================================================
00:22:30.156 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:30.156 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2085804
00:22:30.414 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2085382
00:22:30.414 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2085382 ']'
00:22:30.414 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2085382
00:22:30.414 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:30.415 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:30.415 20:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2085382
00:22:35.015 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:35.015 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:35.015 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2085382'
00:22:35.015 killing process with pid 2085382
00:22:35.015 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2085382
00:22:35.015 [2024-07-24 20:16:34.016011] app.c:1024:log_deprecation_hits:
*WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:30.415 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2085382 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2086210 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2086210 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2086210 ']' 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.673 20:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.932 [2024-07-24 20:16:34.509333] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:30.932 [2024-07-24 20:16:34.509461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.932 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.932 [2024-07-24 20:16:34.596497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.189 [2024-07-24 20:16:34.735608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.189 [2024-07-24 20:16:34.735668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.189 [2024-07-24 20:16:34.735687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.189 [2024-07-24 20:16:34.735702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.189 [2024-07-24 20:16:34.735716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
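Once this restarted target (pid 2086210) is up, it is configured over /var/tmp/spdk.sock the same way as the earlier setup_nvmf_tgt pass: TCP transport, a subsystem with a TLS listener, a malloc namespace, and the allowed host with its PSK. Consolidated, that target-side sequence is a sketch of the commands traced at target/tls.sh@51 through @58 earlier in this log (rpc.py talks to /var/tmp/spdk.sock by default):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    KEY=/tmp/tmp.Pibd66V8Il
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled; still flagged experimental in the notices above.
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY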
00:22:31.189 [2024-07-24 20:16:34.735770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.447 [2024-07-24 20:16:35.081711] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.447 malloc0 00:22:31.447 [2024-07-24 20:16:35.115284] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:31.447 [2024-07-24 20:16:35.133650] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2086255 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2086255 /var/tmp/bdevperf.sock 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2086255 ']' 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.447 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.447 [2024-07-24 20:16:35.209414] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:22:31.447 [2024-07-24 20:16:35.209517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086255 ]
00:22:31.705 EAL: No free 2048 kB hugepages reported on node 1
00:22:31.705 [2024-07-24 20:16:35.286228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:31.705 [2024-07-24 20:16:35.426349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:31.962 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:31.962 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:22:31.962 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Pibd66V8Il
00:22:32.220 20:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:22:32.478 [2024-07-24 20:16:36.133092] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:32.478 nvme0n1
00:22:32.736 20:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:32.736 Running I/O for 1 seconds...
00:22:33.670
00:22:33.670 Latency(us)
00:22:33.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:33.670 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:33.670 Verification LBA range: start 0x0 length 0x2000
00:22:33.670 nvme0n1 : 1.03 2669.34 10.43 0.00 0.00 47320.06 8301.23 49321.91
00:22:33.670 ===================================================================================================================
00:22:33.670 Total : 2669.34 10.43 0.00 0.00 47320.06 8301.23 49321.91
00:22:33.670 0
00:22:33.670 20:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config
00:22:33.670 20:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:33.670 20:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:33.928 20:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.928 20:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:33.928 "subsystems": [ 00:22:33.928 { 00:22:33.928 "subsystem": "keyring", 00:22:33.928 "config": [ 00:22:33.928 { 00:22:33.928 "method": "keyring_file_add_key", 00:22:33.928 "params": { 00:22:33.928 "name": "key0", 00:22:33.928 "path": "/tmp/tmp.Pibd66V8Il" 00:22:33.928 } 00:22:33.928 } 00:22:33.928 ] 00:22:33.928 }, 00:22:33.928 { 00:22:33.928 "subsystem": "iobuf", 00:22:33.928 "config": [ 00:22:33.928 { 00:22:33.928 "method": "iobuf_set_options", 00:22:33.928 "params": { 00:22:33.928 "small_pool_count": 8192, 00:22:33.928 "large_pool_count": 1024, 00:22:33.928 "small_bufsize": 8192, 00:22:33.928 "large_bufsize": 135168 00:22:33.928 } 00:22:33.928 } 00:22:33.928 ] 00:22:33.928 }, 00:22:33.928 { 00:22:33.928
"subsystem": "sock", 00:22:33.928 "config": [ 00:22:33.928 { 00:22:33.928 "method": "sock_set_default_impl", 00:22:33.928 "params": { 00:22:33.928 "impl_name": "posix" 00:22:33.928 } 00:22:33.928 }, 00:22:33.928 { 00:22:33.928 "method": "sock_impl_set_options", 00:22:33.928 "params": { 00:22:33.928 "impl_name": "ssl", 00:22:33.928 "recv_buf_size": 4096, 00:22:33.928 "send_buf_size": 4096, 00:22:33.928 "enable_recv_pipe": true, 00:22:33.928 "enable_quickack": false, 00:22:33.928 "enable_placement_id": 0, 00:22:33.928 "enable_zerocopy_send_server": true, 00:22:33.928 "enable_zerocopy_send_client": false, 00:22:33.928 "zerocopy_threshold": 0, 00:22:33.928 "tls_version": 0, 00:22:33.928 "enable_ktls": false 00:22:33.928 } 00:22:33.928 }, 00:22:33.928 { 00:22:33.928 "method": "sock_impl_set_options", 00:22:33.928 "params": { 00:22:33.928 "impl_name": "posix", 00:22:33.928 "recv_buf_size": 2097152, 00:22:33.928 "send_buf_size": 2097152, 00:22:33.928 "enable_recv_pipe": true, 00:22:33.928 "enable_quickack": false, 00:22:33.928 "enable_placement_id": 0, 00:22:33.928 "enable_zerocopy_send_server": true, 00:22:33.928 "enable_zerocopy_send_client": false, 00:22:33.928 "zerocopy_threshold": 0, 00:22:33.928 "tls_version": 0, 00:22:33.929 "enable_ktls": false 00:22:33.929 } 00:22:33.929 } 00:22:33.929 ] 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "subsystem": "vmd", 00:22:33.929 "config": [] 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "subsystem": "accel", 00:22:33.929 "config": [ 00:22:33.929 { 00:22:33.929 "method": "accel_set_options", 00:22:33.929 "params": { 00:22:33.929 "small_cache_size": 128, 00:22:33.929 "large_cache_size": 16, 00:22:33.929 "task_count": 2048, 00:22:33.929 "sequence_count": 2048, 00:22:33.929 "buf_count": 2048 00:22:33.929 } 00:22:33.929 } 00:22:33.929 ] 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "subsystem": "bdev", 00:22:33.929 "config": [ 00:22:33.929 { 00:22:33.929 "method": "bdev_set_options", 00:22:33.929 "params": { 00:22:33.929 "bdev_io_pool_size": 65535, 00:22:33.929 "bdev_io_cache_size": 256, 00:22:33.929 "bdev_auto_examine": true, 00:22:33.929 "iobuf_small_cache_size": 128, 00:22:33.929 "iobuf_large_cache_size": 16 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "bdev_raid_set_options", 00:22:33.929 "params": { 00:22:33.929 "process_window_size_kb": 1024, 00:22:33.929 "process_max_bandwidth_mb_sec": 0 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "bdev_iscsi_set_options", 00:22:33.929 "params": { 00:22:33.929 "timeout_sec": 30 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "bdev_nvme_set_options", 00:22:33.929 "params": { 00:22:33.929 "action_on_timeout": "none", 00:22:33.929 "timeout_us": 0, 00:22:33.929 "timeout_admin_us": 0, 00:22:33.929 "keep_alive_timeout_ms": 10000, 00:22:33.929 "arbitration_burst": 0, 00:22:33.929 "low_priority_weight": 0, 00:22:33.929 "medium_priority_weight": 0, 00:22:33.929 "high_priority_weight": 0, 00:22:33.929 "nvme_adminq_poll_period_us": 10000, 00:22:33.929 "nvme_ioq_poll_period_us": 0, 00:22:33.929 "io_queue_requests": 0, 00:22:33.929 "delay_cmd_submit": true, 00:22:33.929 "transport_retry_count": 4, 00:22:33.929 "bdev_retry_count": 3, 00:22:33.929 "transport_ack_timeout": 0, 00:22:33.929 "ctrlr_loss_timeout_sec": 0, 00:22:33.929 "reconnect_delay_sec": 0, 00:22:33.929 "fast_io_fail_timeout_sec": 0, 00:22:33.929 "disable_auto_failback": false, 00:22:33.929 "generate_uuids": false, 00:22:33.929 "transport_tos": 0, 00:22:33.929 "nvme_error_stat": false, 00:22:33.929 
"rdma_srq_size": 0, 00:22:33.929 "io_path_stat": false, 00:22:33.929 "allow_accel_sequence": false, 00:22:33.929 "rdma_max_cq_size": 0, 00:22:33.929 "rdma_cm_event_timeout_ms": 0, 00:22:33.929 "dhchap_digests": [ 00:22:33.929 "sha256", 00:22:33.929 "sha384", 00:22:33.929 "sha512" 00:22:33.929 ], 00:22:33.929 "dhchap_dhgroups": [ 00:22:33.929 "null", 00:22:33.929 "ffdhe2048", 00:22:33.929 "ffdhe3072", 00:22:33.929 "ffdhe4096", 00:22:33.929 "ffdhe6144", 00:22:33.929 "ffdhe8192" 00:22:33.929 ] 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "bdev_nvme_set_hotplug", 00:22:33.929 "params": { 00:22:33.929 "period_us": 100000, 00:22:33.929 "enable": false 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "bdev_malloc_create", 00:22:33.929 "params": { 00:22:33.929 "name": "malloc0", 00:22:33.929 "num_blocks": 8192, 00:22:33.929 "block_size": 4096, 00:22:33.929 "physical_block_size": 4096, 00:22:33.929 "uuid": "b042e7e6-d14a-45d3-a531-ff71962648d3", 00:22:33.929 "optimal_io_boundary": 0, 00:22:33.929 "md_size": 0, 00:22:33.929 "dif_type": 0, 00:22:33.929 "dif_is_head_of_md": false, 00:22:33.929 "dif_pi_format": 0 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "bdev_wait_for_examine" 00:22:33.929 } 00:22:33.929 ] 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "subsystem": "nbd", 00:22:33.929 "config": [] 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "subsystem": "scheduler", 00:22:33.929 "config": [ 00:22:33.929 { 00:22:33.929 "method": "framework_set_scheduler", 00:22:33.929 "params": { 00:22:33.929 "name": "static" 00:22:33.929 } 00:22:33.929 } 00:22:33.929 ] 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "subsystem": "nvmf", 00:22:33.929 "config": [ 00:22:33.929 { 00:22:33.929 "method": "nvmf_set_config", 00:22:33.929 "params": { 00:22:33.929 "discovery_filter": "match_any", 00:22:33.929 "admin_cmd_passthru": { 00:22:33.929 "identify_ctrlr": false 00:22:33.929 } 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "nvmf_set_max_subsystems", 00:22:33.929 "params": { 00:22:33.929 "max_subsystems": 1024 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "nvmf_set_crdt", 00:22:33.929 "params": { 00:22:33.929 "crdt1": 0, 00:22:33.929 "crdt2": 0, 00:22:33.929 "crdt3": 0 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "nvmf_create_transport", 00:22:33.929 "params": { 00:22:33.929 "trtype": "TCP", 00:22:33.929 "max_queue_depth": 128, 00:22:33.929 "max_io_qpairs_per_ctrlr": 127, 00:22:33.929 "in_capsule_data_size": 4096, 00:22:33.929 "max_io_size": 131072, 00:22:33.929 "io_unit_size": 131072, 00:22:33.929 "max_aq_depth": 128, 00:22:33.929 "num_shared_buffers": 511, 00:22:33.929 "buf_cache_size": 4294967295, 00:22:33.929 "dif_insert_or_strip": false, 00:22:33.929 "zcopy": false, 00:22:33.929 "c2h_success": false, 00:22:33.929 "sock_priority": 0, 00:22:33.929 "abort_timeout_sec": 1, 00:22:33.929 "ack_timeout": 0, 00:22:33.929 "data_wr_pool_size": 0 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "nvmf_create_subsystem", 00:22:33.929 "params": { 00:22:33.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.929 "allow_any_host": false, 00:22:33.929 "serial_number": "00000000000000000000", 00:22:33.929 "model_number": "SPDK bdev Controller", 00:22:33.929 "max_namespaces": 32, 00:22:33.929 "min_cntlid": 1, 00:22:33.929 "max_cntlid": 65519, 00:22:33.929 "ana_reporting": false 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "nvmf_subsystem_add_host", 00:22:33.929 
"params": { 00:22:33.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.929 "host": "nqn.2016-06.io.spdk:host1", 00:22:33.929 "psk": "key0" 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "nvmf_subsystem_add_ns", 00:22:33.929 "params": { 00:22:33.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.929 "namespace": { 00:22:33.929 "nsid": 1, 00:22:33.929 "bdev_name": "malloc0", 00:22:33.929 "nguid": "B042E7E6D14A45D3A531FF71962648D3", 00:22:33.929 "uuid": "b042e7e6-d14a-45d3-a531-ff71962648d3", 00:22:33.929 "no_auto_visible": false 00:22:33.929 } 00:22:33.929 } 00:22:33.929 }, 00:22:33.929 { 00:22:33.929 "method": "nvmf_subsystem_add_listener", 00:22:33.929 "params": { 00:22:33.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.929 "listen_address": { 00:22:33.929 "trtype": "TCP", 00:22:33.929 "adrfam": "IPv4", 00:22:33.929 "traddr": "10.0.0.2", 00:22:33.929 "trsvcid": "4420" 00:22:33.929 }, 00:22:33.929 "secure_channel": false, 00:22:33.929 "sock_impl": "ssl" 00:22:33.929 } 00:22:33.929 } 00:22:33.929 ] 00:22:33.929 } 00:22:33.929 ] 00:22:33.929 }' 00:22:33.929 20:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:34.496 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:34.496 "subsystems": [ 00:22:34.496 { 00:22:34.496 "subsystem": "keyring", 00:22:34.496 "config": [ 00:22:34.496 { 00:22:34.496 "method": "keyring_file_add_key", 00:22:34.496 "params": { 00:22:34.496 "name": "key0", 00:22:34.496 "path": "/tmp/tmp.Pibd66V8Il" 00:22:34.496 } 00:22:34.496 } 00:22:34.496 ] 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "subsystem": "iobuf", 00:22:34.496 "config": [ 00:22:34.496 { 00:22:34.496 "method": "iobuf_set_options", 00:22:34.496 "params": { 00:22:34.496 "small_pool_count": 8192, 00:22:34.496 "large_pool_count": 1024, 00:22:34.496 "small_bufsize": 8192, 00:22:34.496 "large_bufsize": 135168 00:22:34.496 } 00:22:34.496 } 00:22:34.496 ] 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "subsystem": "sock", 00:22:34.496 "config": [ 00:22:34.496 { 00:22:34.496 "method": "sock_set_default_impl", 00:22:34.496 "params": { 00:22:34.496 "impl_name": "posix" 00:22:34.496 } 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "method": "sock_impl_set_options", 00:22:34.496 "params": { 00:22:34.496 "impl_name": "ssl", 00:22:34.496 "recv_buf_size": 4096, 00:22:34.496 "send_buf_size": 4096, 00:22:34.496 "enable_recv_pipe": true, 00:22:34.496 "enable_quickack": false, 00:22:34.496 "enable_placement_id": 0, 00:22:34.496 "enable_zerocopy_send_server": true, 00:22:34.496 "enable_zerocopy_send_client": false, 00:22:34.496 "zerocopy_threshold": 0, 00:22:34.496 "tls_version": 0, 00:22:34.496 "enable_ktls": false 00:22:34.496 } 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "method": "sock_impl_set_options", 00:22:34.496 "params": { 00:22:34.496 "impl_name": "posix", 00:22:34.496 "recv_buf_size": 2097152, 00:22:34.496 "send_buf_size": 2097152, 00:22:34.496 "enable_recv_pipe": true, 00:22:34.496 "enable_quickack": false, 00:22:34.496 "enable_placement_id": 0, 00:22:34.496 "enable_zerocopy_send_server": true, 00:22:34.496 "enable_zerocopy_send_client": false, 00:22:34.496 "zerocopy_threshold": 0, 00:22:34.496 "tls_version": 0, 00:22:34.496 "enable_ktls": false 00:22:34.496 } 00:22:34.496 } 00:22:34.496 ] 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "subsystem": "vmd", 00:22:34.496 "config": [] 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "subsystem": 
"accel", 00:22:34.496 "config": [ 00:22:34.496 { 00:22:34.496 "method": "accel_set_options", 00:22:34.496 "params": { 00:22:34.496 "small_cache_size": 128, 00:22:34.496 "large_cache_size": 16, 00:22:34.496 "task_count": 2048, 00:22:34.496 "sequence_count": 2048, 00:22:34.496 "buf_count": 2048 00:22:34.496 } 00:22:34.496 } 00:22:34.496 ] 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "subsystem": "bdev", 00:22:34.496 "config": [ 00:22:34.496 { 00:22:34.496 "method": "bdev_set_options", 00:22:34.496 "params": { 00:22:34.496 "bdev_io_pool_size": 65535, 00:22:34.496 "bdev_io_cache_size": 256, 00:22:34.496 "bdev_auto_examine": true, 00:22:34.496 "iobuf_small_cache_size": 128, 00:22:34.496 "iobuf_large_cache_size": 16 00:22:34.496 } 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "method": "bdev_raid_set_options", 00:22:34.496 "params": { 00:22:34.496 "process_window_size_kb": 1024, 00:22:34.496 "process_max_bandwidth_mb_sec": 0 00:22:34.496 } 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "method": "bdev_iscsi_set_options", 00:22:34.496 "params": { 00:22:34.496 "timeout_sec": 30 00:22:34.496 } 00:22:34.496 }, 00:22:34.496 { 00:22:34.496 "method": "bdev_nvme_set_options", 00:22:34.496 "params": { 00:22:34.496 "action_on_timeout": "none", 00:22:34.496 "timeout_us": 0, 00:22:34.496 "timeout_admin_us": 0, 00:22:34.496 "keep_alive_timeout_ms": 10000, 00:22:34.496 "arbitration_burst": 0, 00:22:34.496 "low_priority_weight": 0, 00:22:34.496 "medium_priority_weight": 0, 00:22:34.496 "high_priority_weight": 0, 00:22:34.496 "nvme_adminq_poll_period_us": 10000, 00:22:34.496 "nvme_ioq_poll_period_us": 0, 00:22:34.496 "io_queue_requests": 512, 00:22:34.496 "delay_cmd_submit": true, 00:22:34.496 "transport_retry_count": 4, 00:22:34.496 "bdev_retry_count": 3, 00:22:34.496 "transport_ack_timeout": 0, 00:22:34.496 "ctrlr_loss_timeout_sec": 0, 00:22:34.496 "reconnect_delay_sec": 0, 00:22:34.496 "fast_io_fail_timeout_sec": 0, 00:22:34.496 "disable_auto_failback": false, 00:22:34.496 "generate_uuids": false, 00:22:34.496 "transport_tos": 0, 00:22:34.496 "nvme_error_stat": false, 00:22:34.496 "rdma_srq_size": 0, 00:22:34.496 "io_path_stat": false, 00:22:34.496 "allow_accel_sequence": false, 00:22:34.496 "rdma_max_cq_size": 0, 00:22:34.496 "rdma_cm_event_timeout_ms": 0, 00:22:34.496 "dhchap_digests": [ 00:22:34.496 "sha256", 00:22:34.496 "sha384", 00:22:34.496 "sha512" 00:22:34.496 ], 00:22:34.496 "dhchap_dhgroups": [ 00:22:34.496 "null", 00:22:34.496 "ffdhe2048", 00:22:34.496 "ffdhe3072", 00:22:34.497 "ffdhe4096", 00:22:34.497 "ffdhe6144", 00:22:34.497 "ffdhe8192" 00:22:34.497 ] 00:22:34.497 } 00:22:34.497 }, 00:22:34.497 { 00:22:34.497 "method": "bdev_nvme_attach_controller", 00:22:34.497 "params": { 00:22:34.497 "name": "nvme0", 00:22:34.497 "trtype": "TCP", 00:22:34.497 "adrfam": "IPv4", 00:22:34.497 "traddr": "10.0.0.2", 00:22:34.497 "trsvcid": "4420", 00:22:34.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.497 "prchk_reftag": false, 00:22:34.497 "prchk_guard": false, 00:22:34.497 "ctrlr_loss_timeout_sec": 0, 00:22:34.497 "reconnect_delay_sec": 0, 00:22:34.497 "fast_io_fail_timeout_sec": 0, 00:22:34.497 "psk": "key0", 00:22:34.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.497 "hdgst": false, 00:22:34.497 "ddgst": false 00:22:34.497 } 00:22:34.497 }, 00:22:34.497 { 00:22:34.497 "method": "bdev_nvme_set_hotplug", 00:22:34.497 "params": { 00:22:34.497 "period_us": 100000, 00:22:34.497 "enable": false 00:22:34.497 } 00:22:34.497 }, 00:22:34.497 { 00:22:34.497 "method": "bdev_enable_histogram", 00:22:34.497 
"params": { 00:22:34.497 "name": "nvme0n1", 00:22:34.497 "enable": true 00:22:34.497 } 00:22:34.497 }, 00:22:34.497 { 00:22:34.497 "method": "bdev_wait_for_examine" 00:22:34.497 } 00:22:34.497 ] 00:22:34.497 }, 00:22:34.497 { 00:22:34.497 "subsystem": "nbd", 00:22:34.497 "config": [] 00:22:34.497 } 00:22:34.497 ] 00:22:34.497 }' 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2086255 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2086255 ']' 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2086255 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2086255 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2086255' 00:22:34.497 killing process with pid 2086255 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2086255 00:22:34.497 Received shutdown signal, test time was about 1.000000 seconds 00:22:34.497 00:22:34.497 Latency(us) 00:22:34.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.497 =================================================================================================================== 00:22:34.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:34.497 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2086255 00:22:34.757 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2086210 00:22:34.757 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2086210 ']' 00:22:34.757 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2086210 00:22:34.757 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:34.757 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.757 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2086210 00:22:35.015 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:35.015 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:35.015 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2086210' 00:22:35.015 killing process with pid 2086210 00:22:35.015 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2086210 00:22:35.015 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2086210 00:22:35.275 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:35.275 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo 
'{ 00:22:35.275 "subsystems": [ 00:22:35.275 { 00:22:35.275 "subsystem": "keyring", 00:22:35.275 "config": [ 00:22:35.275 { 00:22:35.275 "method": "keyring_file_add_key", 00:22:35.275 "params": { 00:22:35.275 "name": "key0", 00:22:35.275 "path": "/tmp/tmp.Pibd66V8Il" 00:22:35.275 } 00:22:35.275 } 00:22:35.275 ] 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "subsystem": "iobuf", 00:22:35.275 "config": [ 00:22:35.275 { 00:22:35.275 "method": "iobuf_set_options", 00:22:35.275 "params": { 00:22:35.275 "small_pool_count": 8192, 00:22:35.275 "large_pool_count": 1024, 00:22:35.275 "small_bufsize": 8192, 00:22:35.275 "large_bufsize": 135168 00:22:35.275 } 00:22:35.275 } 00:22:35.275 ] 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "subsystem": "sock", 00:22:35.275 "config": [ 00:22:35.275 { 00:22:35.275 "method": "sock_set_default_impl", 00:22:35.275 "params": { 00:22:35.275 "impl_name": "posix" 00:22:35.275 } 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "method": "sock_impl_set_options", 00:22:35.275 "params": { 00:22:35.275 "impl_name": "ssl", 00:22:35.275 "recv_buf_size": 4096, 00:22:35.275 "send_buf_size": 4096, 00:22:35.275 "enable_recv_pipe": true, 00:22:35.275 "enable_quickack": false, 00:22:35.275 "enable_placement_id": 0, 00:22:35.275 "enable_zerocopy_send_server": true, 00:22:35.275 "enable_zerocopy_send_client": false, 00:22:35.275 "zerocopy_threshold": 0, 00:22:35.275 "tls_version": 0, 00:22:35.275 "enable_ktls": false 00:22:35.275 } 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "method": "sock_impl_set_options", 00:22:35.275 "params": { 00:22:35.275 "impl_name": "posix", 00:22:35.275 "recv_buf_size": 2097152, 00:22:35.275 "send_buf_size": 2097152, 00:22:35.275 "enable_recv_pipe": true, 00:22:35.275 "enable_quickack": false, 00:22:35.275 "enable_placement_id": 0, 00:22:35.275 "enable_zerocopy_send_server": true, 00:22:35.275 "enable_zerocopy_send_client": false, 00:22:35.275 "zerocopy_threshold": 0, 00:22:35.275 "tls_version": 0, 00:22:35.275 "enable_ktls": false 00:22:35.275 } 00:22:35.275 } 00:22:35.275 ] 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "subsystem": "vmd", 00:22:35.275 "config": [] 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "subsystem": "accel", 00:22:35.275 "config": [ 00:22:35.275 { 00:22:35.275 "method": "accel_set_options", 00:22:35.275 "params": { 00:22:35.275 "small_cache_size": 128, 00:22:35.275 "large_cache_size": 16, 00:22:35.275 "task_count": 2048, 00:22:35.275 "sequence_count": 2048, 00:22:35.275 "buf_count": 2048 00:22:35.275 } 00:22:35.275 } 00:22:35.275 ] 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "subsystem": "bdev", 00:22:35.275 "config": [ 00:22:35.275 { 00:22:35.275 "method": "bdev_set_options", 00:22:35.275 "params": { 00:22:35.275 "bdev_io_pool_size": 65535, 00:22:35.275 "bdev_io_cache_size": 256, 00:22:35.275 "bdev_auto_examine": true, 00:22:35.275 "iobuf_small_cache_size": 128, 00:22:35.275 "iobuf_large_cache_size": 16 00:22:35.275 } 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "method": "bdev_raid_set_options", 00:22:35.275 "params": { 00:22:35.275 "process_window_size_kb": 1024, 00:22:35.275 "process_max_bandwidth_mb_sec": 0 00:22:35.275 } 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "method": "bdev_iscsi_set_options", 00:22:35.275 "params": { 00:22:35.275 "timeout_sec": 30 00:22:35.275 } 00:22:35.275 }, 00:22:35.275 { 00:22:35.275 "method": "bdev_nvme_set_options", 00:22:35.275 "params": { 00:22:35.275 "action_on_timeout": "none", 00:22:35.275 "timeout_us": 0, 00:22:35.275 "timeout_admin_us": 0, 00:22:35.275 "keep_alive_timeout_ms": 10000, 00:22:35.275 
"arbitration_burst": 0, 00:22:35.275 "low_priority_weight": 0, 00:22:35.275 "medium_priority_weight": 0, 00:22:35.275 "high_priority_weight": 0, 00:22:35.275 "nvme_adminq_poll_period_us": 10000, 00:22:35.275 "nvme_ioq_poll_period_us": 0, 00:22:35.275 "io_queue_requests": 0, 00:22:35.275 "delay_cmd_submit": true, 00:22:35.275 "transport_retry_count": 4, 00:22:35.275 "bdev_retry_count": 3, 00:22:35.275 "transport_ack_timeout": 0, 00:22:35.275 "ctrlr_loss_timeout_sec": 0, 00:22:35.275 "reconnect_delay_sec": 0, 00:22:35.275 "fast_io_fail_timeout_sec": 0, 00:22:35.275 "disable_auto_failback": false, 00:22:35.276 "generate_uuids": false, 00:22:35.276 "transport_tos": 0, 00:22:35.276 "nvme_error_stat": false, 00:22:35.276 "rdma_srq_size": 0, 00:22:35.276 "io_path_stat": false, 00:22:35.276 "allow_accel_sequence": false, 00:22:35.276 "rdma_max_cq_size": 0, 00:22:35.276 "rdma_cm_event_timeout_ms": 0, 00:22:35.276 "dhchap_digests": [ 00:22:35.276 "sha256", 00:22:35.276 "sha384", 00:22:35.276 "sha512" 00:22:35.276 ], 00:22:35.276 "dhchap_dhgroups": [ 00:22:35.276 "null", 00:22:35.276 "ffdhe2048", 00:22:35.276 "ffdhe3072", 00:22:35.276 "ffdhe4096", 00:22:35.276 "ffdhe6144", 00:22:35.276 "ffdhe8192" 00:22:35.276 ] 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "bdev_nvme_set_hotplug", 00:22:35.276 "params": { 00:22:35.276 "period_us": 100000, 00:22:35.276 "enable": false 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "bdev_malloc_create", 00:22:35.276 "params": { 00:22:35.276 "name": "malloc0", 00:22:35.276 "num_blocks": 8192, 00:22:35.276 "block_size": 4096, 00:22:35.276 "physical_block_size": 4096, 00:22:35.276 "uuid": "b042e7e6-d14a-45d3-a531-ff71962648d3", 00:22:35.276 "optimal_io_boundary": 0, 00:22:35.276 "md_size": 0, 00:22:35.276 "dif_type": 0, 00:22:35.276 "dif_is_head_of_md": false, 00:22:35.276 "dif_pi_format": 0 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "bdev_wait_for_examine" 00:22:35.276 } 00:22:35.276 ] 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "subsystem": "nbd", 00:22:35.276 "config": [] 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "subsystem": "scheduler", 00:22:35.276 "config": [ 00:22:35.276 { 00:22:35.276 "method": "framework_set_scheduler", 00:22:35.276 "params": { 00:22:35.276 "name": "static" 00:22:35.276 } 00:22:35.276 } 00:22:35.276 ] 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "subsystem": "nvmf", 00:22:35.276 "config": [ 00:22:35.276 { 00:22:35.276 "method": "nvmf_set_config", 00:22:35.276 "params": { 00:22:35.276 "discovery_filter": "match_any", 00:22:35.276 "admin_cmd_passthru": { 00:22:35.276 "identify_ctrlr": false 00:22:35.276 } 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "nvmf_set_max_subsystems", 00:22:35.276 "params": { 00:22:35.276 "max_subsystems": 1024 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "nvmf_set_crdt", 00:22:35.276 "params": { 00:22:35.276 "crdt1": 0, 00:22:35.276 "crdt2": 0, 00:22:35.276 "crdt3": 0 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "nvmf_create_transport", 00:22:35.276 "params": { 00:22:35.276 "trtype": "TCP", 00:22:35.276 "max_queue_depth": 128, 00:22:35.276 "max_io_qpairs_per_ctrlr": 127, 00:22:35.276 "in_capsule_data_size": 4096, 00:22:35.276 "max_io_size": 131072, 00:22:35.276 "io_unit_size": 131072, 00:22:35.276 "max_aq_depth": 128, 00:22:35.276 "num_shared_buffers": 511, 00:22:35.276 "buf_cache_size": 4294967295, 00:22:35.276 "dif_insert_or_strip": false, 00:22:35.276 "zcopy": false, 
00:22:35.276 "c2h_success": false, 00:22:35.276 "sock_priority": 0, 00:22:35.276 "abort_timeout_sec": 1, 00:22:35.276 "ack_timeout": 0, 00:22:35.276 "data_wr_pool_size": 0 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "nvmf_create_subsystem", 00:22:35.276 "params": { 00:22:35.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.276 "allow_any_host": false, 00:22:35.276 "serial_number": "00000000000000000000", 00:22:35.276 "model_number": "SPDK bdev Controller", 00:22:35.276 "max_namespaces": 32, 00:22:35.276 "min_cntlid": 1, 00:22:35.276 "max_cntlid": 65519, 00:22:35.276 "ana_reporting": false 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "nvmf_subsystem_add_host", 00:22:35.276 "params": { 00:22:35.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.276 "host": "nqn.2016-06.io.spdk:host1", 00:22:35.276 "psk": "key0" 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "nvmf_subsystem_add_ns", 00:22:35.276 "params": { 00:22:35.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.276 "namespace": { 00:22:35.276 "nsid": 1, 00:22:35.276 "bdev_name": "malloc0", 00:22:35.276 "nguid": "B042E7E6D14A45D3A531FF71962648D3", 00:22:35.276 "uuid": "b042e7e6-d14a-45d3-a531-ff71962648d3", 00:22:35.276 "no_auto_visible": false 00:22:35.276 } 00:22:35.276 } 00:22:35.276 }, 00:22:35.276 { 00:22:35.276 "method": "nvmf_subsystem_add_listener", 00:22:35.276 "params": { 00:22:35.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.276 "listen_address": { 00:22:35.276 "trtype": "TCP", 00:22:35.276 "adrfam": "IPv4", 00:22:35.276 "traddr": "10.0.0.2", 00:22:35.276 "trsvcid": "4420" 00:22:35.276 }, 00:22:35.276 "secure_channel": false, 00:22:35.276 "sock_impl": "ssl" 00:22:35.276 } 00:22:35.276 } 00:22:35.276 ] 00:22:35.276 } 00:22:35.276 ] 00:22:35.276 }' 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2086773 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2086773 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2086773 ']' 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.276 20:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.276 [2024-07-24 20:16:39.038677] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:22:35.276 [2024-07-24 20:16:39.038790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.535 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.535 [2024-07-24 20:16:39.151553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.795 [2024-07-24 20:16:39.348113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.795 [2024-07-24 20:16:39.348212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.795 [2024-07-24 20:16:39.348247] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.795 [2024-07-24 20:16:39.348276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.795 [2024-07-24 20:16:39.348301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.795 [2024-07-24 20:16:39.348485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.065 [2024-07-24 20:16:39.670549] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.065 [2024-07-24 20:16:39.711951] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:36.065 [2024-07-24 20:16:39.712446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2086805 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2086805 /var/tmp/bdevperf.sock 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2086805 ']' 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.065 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:36.065 "subsystems": [ 00:22:36.065 { 00:22:36.065 "subsystem": "keyring", 00:22:36.065 "config": [ 00:22:36.065 { 00:22:36.065 "method": "keyring_file_add_key", 00:22:36.065 "params": { 00:22:36.065 "name": "key0", 00:22:36.065 "path": "/tmp/tmp.Pibd66V8Il" 00:22:36.065 } 00:22:36.065 } 00:22:36.065 ] 00:22:36.065 }, 
00:22:36.065 { 00:22:36.065 "subsystem": "iobuf", 00:22:36.065 "config": [ 00:22:36.065 { 00:22:36.065 "method": "iobuf_set_options", 00:22:36.065 "params": { 00:22:36.065 "small_pool_count": 8192, 00:22:36.065 "large_pool_count": 1024, 00:22:36.065 "small_bufsize": 8192, 00:22:36.065 "large_bufsize": 135168 00:22:36.065 } 00:22:36.065 } 00:22:36.065 ] 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "subsystem": "sock", 00:22:36.065 "config": [ 00:22:36.065 { 00:22:36.065 "method": "sock_set_default_impl", 00:22:36.065 "params": { 00:22:36.065 "impl_name": "posix" 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "method": "sock_impl_set_options", 00:22:36.065 "params": { 00:22:36.065 "impl_name": "ssl", 00:22:36.065 "recv_buf_size": 4096, 00:22:36.065 "send_buf_size": 4096, 00:22:36.065 "enable_recv_pipe": true, 00:22:36.065 "enable_quickack": false, 00:22:36.065 "enable_placement_id": 0, 00:22:36.065 "enable_zerocopy_send_server": true, 00:22:36.065 "enable_zerocopy_send_client": false, 00:22:36.065 "zerocopy_threshold": 0, 00:22:36.065 "tls_version": 0, 00:22:36.065 "enable_ktls": false 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "method": "sock_impl_set_options", 00:22:36.065 "params": { 00:22:36.065 "impl_name": "posix", 00:22:36.065 "recv_buf_size": 2097152, 00:22:36.065 "send_buf_size": 2097152, 00:22:36.065 "enable_recv_pipe": true, 00:22:36.065 "enable_quickack": false, 00:22:36.065 "enable_placement_id": 0, 00:22:36.065 "enable_zerocopy_send_server": true, 00:22:36.065 "enable_zerocopy_send_client": false, 00:22:36.065 "zerocopy_threshold": 0, 00:22:36.065 "tls_version": 0, 00:22:36.065 "enable_ktls": false 00:22:36.065 } 00:22:36.065 } 00:22:36.065 ] 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "subsystem": "vmd", 00:22:36.065 "config": [] 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "subsystem": "accel", 00:22:36.065 "config": [ 00:22:36.065 { 00:22:36.065 "method": "accel_set_options", 00:22:36.065 "params": { 00:22:36.065 "small_cache_size": 128, 00:22:36.065 "large_cache_size": 16, 00:22:36.065 "task_count": 2048, 00:22:36.065 "sequence_count": 2048, 00:22:36.065 "buf_count": 2048 00:22:36.065 } 00:22:36.065 } 00:22:36.065 ] 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "subsystem": "bdev", 00:22:36.065 "config": [ 00:22:36.065 { 00:22:36.065 "method": "bdev_set_options", 00:22:36.065 "params": { 00:22:36.065 "bdev_io_pool_size": 65535, 00:22:36.065 "bdev_io_cache_size": 256, 00:22:36.065 "bdev_auto_examine": true, 00:22:36.065 "iobuf_small_cache_size": 128, 00:22:36.065 "iobuf_large_cache_size": 16 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "method": "bdev_raid_set_options", 00:22:36.065 "params": { 00:22:36.065 "process_window_size_kb": 1024, 00:22:36.065 "process_max_bandwidth_mb_sec": 0 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "method": "bdev_iscsi_set_options", 00:22:36.065 "params": { 00:22:36.065 "timeout_sec": 30 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "method": "bdev_nvme_set_options", 00:22:36.065 "params": { 00:22:36.065 "action_on_timeout": "none", 00:22:36.065 "timeout_us": 0, 00:22:36.065 "timeout_admin_us": 0, 00:22:36.065 "keep_alive_timeout_ms": 10000, 00:22:36.065 "arbitration_burst": 0, 00:22:36.065 "low_priority_weight": 0, 00:22:36.065 "medium_priority_weight": 0, 00:22:36.065 "high_priority_weight": 0, 00:22:36.065 "nvme_adminq_poll_period_us": 10000, 00:22:36.065 "nvme_ioq_poll_period_us": 0, 00:22:36.065 "io_queue_requests": 512, 00:22:36.065 "delay_cmd_submit": true, 00:22:36.065 
"transport_retry_count": 4, 00:22:36.065 "bdev_retry_count": 3, 00:22:36.065 "transport_ack_timeout": 0, 00:22:36.065 "ctrlr_loss_timeout_sec": 0, 00:22:36.065 "reconnect_delay_sec": 0, 00:22:36.065 "fast_io_fail_timeout_sec": 0, 00:22:36.065 "disable_auto_failback": false, 00:22:36.065 "generate_uuids": false, 00:22:36.065 "transport_tos": 0, 00:22:36.065 "nvme_error_stat": false, 00:22:36.065 "rdma_srq_size": 0, 00:22:36.065 "io_path_stat": false, 00:22:36.065 "allow_accel_sequence": false, 00:22:36.065 "rdma_max_cq_size": 0, 00:22:36.065 "rdma_cm_event_timeout_ms": 0, 00:22:36.065 "dhchap_digests": [ 00:22:36.065 "sha256", 00:22:36.065 "sha384", 00:22:36.065 "sha512" 00:22:36.065 ], 00:22:36.065 "dhchap_dhgroups": [ 00:22:36.065 "null", 00:22:36.065 "ffdhe2048", 00:22:36.065 "ffdhe3072", 00:22:36.065 "ffdhe4096", 00:22:36.065 "ffdhe6144", 00:22:36.065 "ffdhe8192" 00:22:36.065 ] 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "method": "bdev_nvme_attach_controller", 00:22:36.065 "params": { 00:22:36.065 "name": "nvme0", 00:22:36.065 "trtype": "TCP", 00:22:36.065 "adrfam": "IPv4", 00:22:36.065 "traddr": "10.0.0.2", 00:22:36.065 "trsvcid": "4420", 00:22:36.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.065 "prchk_reftag": false, 00:22:36.065 "prchk_guard": false, 00:22:36.065 "ctrlr_loss_timeout_sec": 0, 00:22:36.065 "reconnect_delay_sec": 0, 00:22:36.065 "fast_io_fail_timeout_sec": 0, 00:22:36.065 "psk": "key0", 00:22:36.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.065 "hdgst": false, 00:22:36.065 "ddgst": false 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "method": "bdev_nvme_set_hotplug", 00:22:36.065 "params": { 00:22:36.065 "period_us": 100000, 00:22:36.065 "enable": false 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.065 "method": "bdev_enable_histogram", 00:22:36.065 "params": { 00:22:36.065 "name": "nvme0n1", 00:22:36.065 "enable": true 00:22:36.065 } 00:22:36.065 }, 00:22:36.065 { 00:22:36.066 "method": "bdev_wait_for_examine" 00:22:36.066 } 00:22:36.066 ] 00:22:36.066 }, 00:22:36.066 { 00:22:36.066 "subsystem": "nbd", 00:22:36.066 "config": [] 00:22:36.066 } 00:22:36.066 ] 00:22:36.066 }' 00:22:36.066 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.066 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.066 20:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.340 [2024-07-24 20:16:39.850173] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:22:36.340 [2024-07-24 20:16:39.850271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086805 ] 00:22:36.340 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.340 [2024-07-24 20:16:39.943402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.340 [2024-07-24 20:16:40.086559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.598 [2024-07-24 20:16:40.286995] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.855 20:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.856 20:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:36.856 20:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:36.856 20:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:37.422 20:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.422 20:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:37.422 Running I/O for 1 seconds... 00:22:38.796 00:22:38.796 Latency(us) 00:22:38.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.796 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:38.796 Verification LBA range: start 0x0 length 0x2000 00:22:38.796 nvme0n1 : 1.03 2560.61 10.00 0.00 0.00 49271.95 8641.04 45826.65 00:22:38.796 =================================================================================================================== 00:22:38.796 Total : 2560.61 10.00 0.00 0.00 49271.95 8641.04 45826.65 00:22:38.796 0 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:38.796 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:38.796 nvmf_trace.0 00:22:38.797 20:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2086805 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2086805 ']' 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2086805 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2086805 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2086805' 00:22:38.797 killing process with pid 2086805 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2086805 00:22:38.797 Received shutdown signal, test time was about 1.000000 seconds 00:22:38.797 00:22:38.797 Latency(us) 00:22:38.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.797 =================================================================================================================== 00:22:38.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:38.797 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2086805 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.055 rmmod nvme_tcp 00:22:39.055 rmmod nvme_fabrics 00:22:39.055 rmmod nvme_keyring 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2086773 ']' 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2086773 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2086773 ']' 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2086773 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.055 20:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2086773 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2086773' 00:22:39.055 killing process with pid 2086773 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2086773 00:22:39.055 20:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2086773 00:22:39.311 20:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.311 20:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.311 20:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.311 20:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.311 20:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.311 20:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.311 20:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.311 20:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NORksAbXIw /tmp/tmp.sqp8Xck1BI /tmp/tmp.Pibd66V8Il 00:22:41.843 00:22:41.843 real 1m33.132s 00:22:41.843 user 2m35.248s 00:22:41.843 sys 0m28.815s 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.843 ************************************ 00:22:41.843 END TEST nvmf_tls 00:22:41.843 ************************************ 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:41.843 ************************************ 00:22:41.843 START TEST nvmf_fips 00:22:41.843 ************************************ 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:41.843 * Looking for test storage... 
00:22:41.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.843 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:41.844 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:41.845 Error setting digest 00:22:41.845 0092E31E837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:41.845 0092E31E837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.845 20:16:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:44.375 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.376 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:44.636 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 
00:22:44.636 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:44.636 Found net devices under 0000:84:00.0: cvl_0_0 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:44.636 Found net devices under 0000:84:00.1: cvl_0_1 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.636 
20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:22:44.636 00:22:44.636 --- 10.0.0.2 ping statistics --- 00:22:44.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.636 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:22:44.636 00:22:44.636 --- 10.0.0.1 ping statistics --- 00:22:44.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.636 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2089298 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2089298 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2089298 ']' 00:22:44.636 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.637 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.637 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.637 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.637 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:44.896 [2024-07-24 20:16:48.486784] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
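Because this host has a two-port NIC and NET_TYPE=phy, nvmf_tcp_init (traced above) fabricates a point-to-point topology instead of relying on loopback: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings just confirmed both directions. Condensed from the trace, the whole setup is:

  # Condensed from the nvmf_tcp_init trace above (flush/cleanup steps omitted).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

Every nvmf_tgt in this test is then launched under ip netns exec cvl_0_0_ns_spdk, which is exactly the NVMF_TARGET_NS_CMD prefix folded into NVMF_APP in the trace above.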
00:22:44.896 [2024-07-24 20:16:48.486894] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.896 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.896 [2024-07-24 20:16:48.576399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.155 [2024-07-24 20:16:48.713800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.155 [2024-07-24 20:16:48.713874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.155 [2024-07-24 20:16:48.713895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.155 [2024-07-24 20:16:48.713912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.155 [2024-07-24 20:16:48.713927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.155 [2024-07-24 20:16:48.713964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.155 20:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:45.413 [2024-07-24 20:16:49.197634] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.672 [2024-07-24 20:16:49.213594] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.672 [2024-07-24 20:16:49.213883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.672 
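setup_nvmf_tgt_conf runs with xtrace suppressed, so only its side effects show up here: the interchange-format key written to key.txt, the TCP transport and TLS listener notices above, and the PSK-path deprecation warning below. Under that caveat, a plausible reconstruction of the rpc.py sequence it drives is the following; the subsystem layout is inferred from names seen elsewhere in this log (cnode1/host1, malloc0, the serial number), and the malloc sizes are invented for illustration:

  # Hypothetical reconstruction -- the log only shows the key file, rpc.py,
  # and the resulting TCP/TLS listener; this is not a verbatim trace.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt                                   # loose permissions would be rejected
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -m 10
  $rpc bdev_malloc_create -b malloc0 32 4096           # sizes are illustrative
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk key.txt                                    # source of the PSK-path deprecation warning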
[2024-07-24 20:16:49.247160] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:45.672 malloc0 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2089446 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2089446 /var/tmp/bdevperf.sock 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2089446 ']' 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.672 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:45.672 [2024-07-24 20:16:49.353346] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:22:45.672 [2024-07-24 20:16:49.353446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089446 ] 00:22:45.672 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.672 [2024-07-24 20:16:49.430095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.931 [2024-07-24 20:16:49.575002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.931 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.931 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:45.931 20:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:46.497 [2024-07-24 20:16:50.146022] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.497 [2024-07-24 20:16:50.146213] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:46.497 TLSTESTn1 00:22:46.497 20:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.755 Running I/O for 10 seconds... 
00:22:56.725 00:22:56.725 Latency(us) 00:22:56.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.725 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:56.725 Verification LBA range: start 0x0 length 0x2000 00:22:56.725 TLSTESTn1 : 10.04 2482.84 9.70 0.00 0.00 51439.88 9709.04 45049.93 00:22:56.725 =================================================================================================================== 00:22:56.725 Total : 2482.84 9.70 0.00 0.00 51439.88 9709.04 45049.93 00:22:56.725 0 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:56.725 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:56.725 nvmf_trace.0 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2089446 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2089446 ']' 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2089446 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2089446 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2089446' 00:22:56.983 killing process with pid 2089446 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2089446 00:22:56.983 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.983 00:22:56.983 Latency(us) 00:22:56.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.983 =================================================================================================================== 00:22:56.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.983 
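The ten-second verify run above (~2483 IOPS at queue depth 128 and 4 KiB blocks, with TLS on the wire) comes from the usual three-step bdevperf pattern; all three commands are visible in the trace, shortened here only by trimming the Jenkins workspace prefix:

  # 1. Start bdevperf idle; 2. attach the TLS-protected controller over RPC; 3. drive I/O.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &                 # -z: wait until configured over RPC
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt                     # TLS handshake happens at attach time
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests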
[2024-07-24 20:17:00.556845] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:56.983 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2089446 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.240 rmmod nvme_tcp 00:22:57.240 rmmod nvme_fabrics 00:22:57.240 rmmod nvme_keyring 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2089298 ']' 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2089298 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2089298 ']' 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2089298 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2089298 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2089298' 00:22:57.240 killing process with pid 2089298 00:22:57.240 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2089298 00:22:57.241 [2024-07-24 20:17:00.988669] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:57.241 20:17:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2089298 00:22:57.807 20:17:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.807 20:17:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.807 20:17:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.807 20:17:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.807 20:17:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.807 20:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.807 20:17:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.807 20:17:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:59.732 00:22:59.732 real 0m18.238s 00:22:59.732 user 0m22.867s 00:22:59.732 sys 0m6.660s 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:59.732 ************************************ 00:22:59.732 END TEST nvmf_fips 00:22:59.732 ************************************ 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.732 20:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.021 
20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.021 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:03.022 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:03.022 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
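Note: the device-scan trace above reduces to two steps: filter the PCI bus for whitelisted NIC IDs, then read each function's interface name out of sysfs. A condensed sketch with this host's two addresses (mirrors the nvmf/common.sh@383-400 lines as traced):

# For each whitelisted port, the net device name is the basename of the
# entry under its sysfs 'net' directory.
for pci in 0000:84:00.0 0000:84:00.1; do
    for net in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${net##*/}"   # cvl_0_0, cvl_0_1
    done
done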
00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:03.022 Found net devices under 0000:84:00.0: cvl_0_0 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:03.022 Found net devices under 0000:84:00.1: cvl_0_1 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:03.022 ************************************ 00:23:03.022 START TEST nvmf_perf_adq 00:23:03.022 ************************************ 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:03.022 * Looking for test storage... 
00:23:03.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.022 20:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.022 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.023 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.023 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:03.023 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:03.023 20:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:05.557 20:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:05.557 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:05.557 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:05.557 Found net devices under 0000:84:00.0: cvl_0_0 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
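Note: the hex values in the e810/x722/mlx arrays built above are PCI device IDs keyed under their vendor ID; lspci can confirm which bucket a port lands in (a sketch: the IDs are from the trace, the part-name comments are an assumption):

# Filter the bus by the same vendor:device pairs the arrays use.
lspci -d 8086:159b   # matches both 0000:84:00.x ports found above (E810, SFP variant)
lspci -d 8086:1592   # the other E810 ID in the whitelist (no match on this host)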
00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.557 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:05.557 Found net devices under 0000:84:00.1: cvl_0_1 00:23:05.558 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.558 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:05.558 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.558 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:05.558 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:05.558 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:05.558 20:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:05.817 20:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:08.353 20:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
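Note: perf_adq.sh@53-55 above reload the ice driver so ADQ starts from a clean state, then wait before the ports are touched again. As traced, the helper amounts to:

# Sketch of adq_reload_driver as traced above.
adq_reload_driver() {
    rmmod ice      # tears down cvl_0_0/cvl_0_1
    modprobe ice   # reload for a fresh ADQ-capable driver state
    sleep 5        # settle time the test allows before reconfiguring
}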
00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.627 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:13.628 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:13.628 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:13.628 Found net devices under 0000:84:00.0: cvl_0_0 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.628 20:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:13.628 Found net devices under 0000:84:00.1: cvl_0_1 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
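Note: the nvmf_tcp_init trace above builds a two-port, back-to-back topology: the target port cvl_0_0 (10.0.0.2) moves into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace. Condensed (address flushes omitted), the commands are:

# Target side lives in its own netns; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up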
00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:23:13.628 00:23:13.628 --- 10.0.0.2 ping statistics --- 00:23:13.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.628 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:23:13.628 00:23:13.628 --- 10.0.0.1 ping statistics --- 00:23:13.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.628 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:23:13.628 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2095344 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2095344 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2095344 ']' 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:13.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.629 20:17:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.629 [2024-07-24 20:17:16.761848] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:23:13.629 [2024-07-24 20:17:16.761951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.629 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.629 [2024-07-24 20:17:16.875510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:13.629 [2024-07-24 20:17:17.075125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.629 [2024-07-24 20:17:17.075228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.629 [2024-07-24 20:17:17.075263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.629 [2024-07-24 20:17:17.075304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.629 [2024-07-24 20:17:17.075330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.629 [2024-07-24 20:17:17.075469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.629 [2024-07-24 20:17:17.075505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.629 [2024-07-24 20:17:17.075565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.629 [2024-07-24 20:17:17.075569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
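Note: because nvmf_tgt was started with --wait-for-rpc, the app parks before framework init, which is what lets the test flip the posix sock options while no socket exists yet; only then is the framework started and the transport created with a fixed sock priority. The ADQ target configuration traced over the next stretch of the log, in order (a sketch; the RPC socket is the default /var/tmp/spdk.sock inside the netns):

rpc='scripts/rpc.py'   # talks to /var/tmp/spdk.sock by default
# ADQ knobs must land before framework_start_init.
$rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
# Back-end namespace and the subsystem the perf initiator connects to.
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420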
00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.562 [2024-07-24 20:17:18.311954] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.562 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.820 Malloc1 00:23:14.820 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.821 [2024-07-24 20:17:18.367743] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2095631 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:14.821 20:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:14.821 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.721 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:16.721 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.721 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.721 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.721 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:16.721 "tick_rate": 2700000000, 00:23:16.721 "poll_groups": [ 00:23:16.721 { 00:23:16.721 "name": "nvmf_tgt_poll_group_000", 00:23:16.721 "admin_qpairs": 1, 00:23:16.721 "io_qpairs": 1, 00:23:16.721 "current_admin_qpairs": 1, 00:23:16.721 "current_io_qpairs": 1, 00:23:16.721 "pending_bdev_io": 0, 00:23:16.721 "completed_nvme_io": 15848, 00:23:16.721 "transports": [ 00:23:16.721 { 00:23:16.721 "trtype": "TCP" 00:23:16.721 } 00:23:16.721 ] 00:23:16.721 }, 00:23:16.721 { 00:23:16.721 "name": "nvmf_tgt_poll_group_001", 00:23:16.721 "admin_qpairs": 0, 00:23:16.721 "io_qpairs": 1, 00:23:16.721 "current_admin_qpairs": 0, 00:23:16.721 "current_io_qpairs": 1, 00:23:16.721 "pending_bdev_io": 0, 00:23:16.721 "completed_nvme_io": 16001, 00:23:16.721 "transports": [ 00:23:16.721 { 00:23:16.721 "trtype": "TCP" 00:23:16.721 } 00:23:16.721 ] 00:23:16.721 }, 00:23:16.721 { 00:23:16.721 "name": "nvmf_tgt_poll_group_002", 00:23:16.721 "admin_qpairs": 0, 00:23:16.722 "io_qpairs": 1, 00:23:16.722 "current_admin_qpairs": 0, 00:23:16.722 "current_io_qpairs": 1, 00:23:16.722 "pending_bdev_io": 0, 00:23:16.722 "completed_nvme_io": 16153, 00:23:16.722 "transports": [ 00:23:16.722 { 00:23:16.722 "trtype": "TCP" 00:23:16.722 } 00:23:16.722 ] 00:23:16.722 }, 00:23:16.722 { 00:23:16.722 "name": "nvmf_tgt_poll_group_003", 00:23:16.722 "admin_qpairs": 0, 00:23:16.722 "io_qpairs": 1, 00:23:16.722 "current_admin_qpairs": 0, 00:23:16.722 "current_io_qpairs": 1, 00:23:16.722 "pending_bdev_io": 0, 00:23:16.722 "completed_nvme_io": 15498, 00:23:16.722 "transports": [ 00:23:16.722 { 00:23:16.722 "trtype": "TCP" 00:23:16.722 } 00:23:16.722 ] 00:23:16.722 } 00:23:16.722 ] 00:23:16.722 }' 00:23:16.722 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:16.722 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:16.722 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:16.722 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:16.722 20:17:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 2095631 00:23:24.876 Initializing NVMe Controllers 00:23:24.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:24.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:24.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:24.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:24.876 Initialization complete. Launching workers. 00:23:24.876 ======================================================== 00:23:24.876 Latency(us) 00:23:24.876 Device Information : IOPS MiB/s Average min max 00:23:24.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8171.17 31.92 7835.46 2924.55 12021.83 00:23:24.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8408.27 32.84 7614.30 4931.58 10548.57 00:23:24.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8519.17 33.28 7513.70 3170.97 10898.97 00:23:24.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8361.77 32.66 7654.57 3486.65 11951.91 00:23:24.876 ======================================================== 00:23:24.876 Total : 33460.40 130.70 7652.76 2924.55 12021.83 00:23:24.876 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.876 rmmod nvme_tcp 00:23:24.876 rmmod nvme_fabrics 00:23:24.876 rmmod nvme_keyring 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2095344 ']' 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2095344 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2095344 ']' 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2095344 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2095344 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:24.876 20:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2095344' 00:23:24.876 killing process with pid 2095344 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2095344 00:23:24.876 20:17:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2095344 00:23:25.445 20:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:25.445 20:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:25.445 20:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:25.445 20:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:25.445 20:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:25.445 20:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.445 20:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.445 20:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.348 20:17:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.348 20:17:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:27.348 20:17:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:28.281 20:17:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:30.184 20:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:35.459 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.459 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:35.460 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:35.460 Found net devices under 0000:84:00.0: cvl_0_0 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:35.460 Found net devices under 0000:84:00.1: cvl_0_1 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.460 20:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:23:35.460 00:23:35.460 --- 10.0.0.2 ping statistics --- 00:23:35.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.460 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:23:35.460 00:23:35.460 --- 10.0.0.1 ping statistics --- 00:23:35.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.460 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:35.460 net.core.busy_poll = 1 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:35.460 net.core.busy_read = 1 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:35.460 
20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:35.460 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2098237 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2098237 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2098237 ']' 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.719 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 [2024-07-24 20:17:39.313828] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:23:35.719 [2024-07-24 20:17:39.313938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.719 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.719 [2024-07-24 20:17:39.411864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.978 [2024-07-24 20:17:39.566063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.978 [2024-07-24 20:17:39.566134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.978 [2024-07-24 20:17:39.566155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.978 [2024-07-24 20:17:39.566172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.978 [2024-07-24 20:17:39.566188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
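The ADQ configuration exercised above reduces to the following sequence (a condensed sketch of the commands traced in this run; the device cvl_0_0, namespace cvl_0_0_ns_spdk, and target address 10.0.0.2 are specific to this test bed):

  # Enable hardware TC offload on the E810 port and disable packet-inspect optimization
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # Allow sockets to busy-poll their receive queues
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Split the NIC into two traffic classes: queues 0-1 stay default (TC 0),
  # queues 2-3 are reserved for NVMe/TCP (TC 1)
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP traffic (TCP port 4420) into traffic class 1 in hardware
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The test later verifies queue placement by pulling nvmf_get_stats (via the suite's rpc_cmd helper) and counting poll groups that currently own I/O qpairs, using the same jq filter seen in the trace:

  # Count poll groups with the expected number of active I/O qpairs
  rpc_cmd nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l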
00:23:35.978 [2024-07-24 20:17:39.566315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.978 [2024-07-24 20:17:39.566375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.978 [2024-07-24 20:17:39.566442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.978 [2024-07-24 20:17:39.566453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.978 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:36.237 [2024-07-24 20:17:39.836407] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:36.237 Malloc1 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:36.237 [2024-07-24 20:17:39.894392] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2098274 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:36.237 20:17:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:36.237 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.135 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:38.135 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.135 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.135 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.135 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:38.135 "tick_rate": 2700000000, 00:23:38.135 "poll_groups": [ 00:23:38.135 { 00:23:38.135 "name": "nvmf_tgt_poll_group_000", 00:23:38.135 "admin_qpairs": 1, 00:23:38.135 "io_qpairs": 3, 00:23:38.135 "current_admin_qpairs": 1, 00:23:38.135 
"current_io_qpairs": 3, 00:23:38.136 "pending_bdev_io": 0, 00:23:38.136 "completed_nvme_io": 20209, 00:23:38.136 "transports": [ 00:23:38.136 { 00:23:38.136 "trtype": "TCP" 00:23:38.136 } 00:23:38.136 ] 00:23:38.136 }, 00:23:38.136 { 00:23:38.136 "name": "nvmf_tgt_poll_group_001", 00:23:38.136 "admin_qpairs": 0, 00:23:38.136 "io_qpairs": 1, 00:23:38.136 "current_admin_qpairs": 0, 00:23:38.136 "current_io_qpairs": 1, 00:23:38.136 "pending_bdev_io": 0, 00:23:38.136 "completed_nvme_io": 19193, 00:23:38.136 "transports": [ 00:23:38.136 { 00:23:38.136 "trtype": "TCP" 00:23:38.136 } 00:23:38.136 ] 00:23:38.136 }, 00:23:38.136 { 00:23:38.136 "name": "nvmf_tgt_poll_group_002", 00:23:38.136 "admin_qpairs": 0, 00:23:38.136 "io_qpairs": 0, 00:23:38.136 "current_admin_qpairs": 0, 00:23:38.136 "current_io_qpairs": 0, 00:23:38.136 "pending_bdev_io": 0, 00:23:38.136 "completed_nvme_io": 0, 00:23:38.136 "transports": [ 00:23:38.136 { 00:23:38.136 "trtype": "TCP" 00:23:38.136 } 00:23:38.136 ] 00:23:38.136 }, 00:23:38.136 { 00:23:38.136 "name": "nvmf_tgt_poll_group_003", 00:23:38.136 "admin_qpairs": 0, 00:23:38.136 "io_qpairs": 0, 00:23:38.136 "current_admin_qpairs": 0, 00:23:38.136 "current_io_qpairs": 0, 00:23:38.136 "pending_bdev_io": 0, 00:23:38.136 "completed_nvme_io": 0, 00:23:38.136 "transports": [ 00:23:38.136 { 00:23:38.136 "trtype": "TCP" 00:23:38.136 } 00:23:38.136 ] 00:23:38.136 } 00:23:38.136 ] 00:23:38.136 }' 00:23:38.393 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:38.393 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:38.393 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:38.393 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:38.393 20:17:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2098274 00:23:46.506 Initializing NVMe Controllers 00:23:46.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:46.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:46.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:46.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:46.506 Initialization complete. Launching workers. 
00:23:46.506 ======================================================== 00:23:46.506 Latency(us) 00:23:46.506 Device Information : IOPS MiB/s Average min max 00:23:46.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3570.50 13.95 17968.27 2926.29 68447.00 00:23:46.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3692.10 14.42 17396.10 2561.48 67953.71 00:23:46.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3610.10 14.10 17788.92 2708.00 65232.65 00:23:46.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10202.90 39.86 6290.16 1538.82 46107.15 00:23:46.506 ======================================================== 00:23:46.506 Total : 21075.60 82.33 12183.83 1538.82 68447.00 00:23:46.506 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.506 rmmod nvme_tcp 00:23:46.506 rmmod nvme_fabrics 00:23:46.506 rmmod nvme_keyring 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2098237 ']' 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2098237 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2098237 ']' 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2098237 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2098237 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2098237' 00:23:46.506 killing process with pid 2098237 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2098237 00:23:46.506 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2098237 00:23:47.106 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.106 
20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.106 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.106 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.106 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.106 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.106 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.106 20:17:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:50.394 00:23:50.394 real 0m47.569s 00:23:50.394 user 2m46.124s 00:23:50.394 sys 0m10.447s 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:50.394 ************************************ 00:23:50.394 END TEST nvmf_perf_adq 00:23:50.394 ************************************ 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:50.394 ************************************ 00:23:50.394 START TEST nvmf_shutdown 00:23:50.394 ************************************ 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:50.394 * Looking for test storage... 
00:23:50.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.394 20:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:50.394 20:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:50.394 ************************************ 00:23:50.394 START TEST nvmf_shutdown_tc1 00:23:50.394 ************************************ 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:50.394 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.395 20:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.927 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:52.928 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:52.928 20:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:52.928 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:52.928 Found net devices under 0000:84:00.0: cvl_0_0 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:52.928 Found net devices under 0000:84:00.1: cvl_0_1 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.928 20:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:23:52.928 00:23:52.928 --- 10.0.0.2 ping statistics --- 00:23:52.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.928 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:23:52.928 00:23:52.928 --- 10.0.0.1 ping statistics --- 00:23:52.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.928 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:23:52.928 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2101679 00:23:52.929 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:52.929 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2101679 00:23:52.929 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2101679 ']' 00:23:52.929 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.929 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.929 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.929 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.929 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:52.929 [2024-07-24 20:17:56.586908] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:23:52.929 [2024-07-24 20:17:56.587009] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.929 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.929 [2024-07-24 20:17:56.681042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.188 [2024-07-24 20:17:56.827290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.188 [2024-07-24 20:17:56.827366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.188 [2024-07-24 20:17:56.827387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.188 [2024-07-24 20:17:56.827404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.188 [2024-07-24 20:17:56.827418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
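For reference, the nvmf_tcp_init sequence traced above reduces to the following minimal sketch: one E810 port (cvl_0_0) is moved into a private network namespace as the target side, while its back-to-back peer (cvl_0_1) stays in the root namespace as the initiator side, and the target application is then launched inside the namespace. Interface names, addresses, firewall rule, and nvmf_tgt flags are the ones from this run; the commands assume root:

ip -4 addr flush cvl_0_0                                      # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                            # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # target runs inside the namespace

Prefixing the target with ip netns exec is exactly what the NVMF_TARGET_NS_CMD/NVMF_APP composition in the trace above does, which is why nvmfpid is later waited on via /var/tmp/spdk.sock in the root namespace while the listener lives on 10.0.0.2 inside the namespace.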
00:23:53.188 [2024-07-24 20:17:56.827544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.188 [2024-07-24 20:17:56.827608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.188 [2024-07-24 20:17:56.827666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:53.188 [2024-07-24 20:17:56.827669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.447 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.447 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:53.447 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.447 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.447 20:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.447 [2024-07-24 20:17:57.009848] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.447 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.447 Malloc1 00:23:53.447 [2024-07-24 20:17:57.104422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.447 Malloc2 00:23:53.447 Malloc3 00:23:53.705 Malloc4 00:23:53.705 Malloc5 00:23:53.705 Malloc6 00:23:53.705 Malloc7 00:23:53.705 Malloc8 00:23:53.964 Malloc9 00:23:53.964 Malloc10 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2101808 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2101808 /var/tmp/bdevperf.sock 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2101808 ']' 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:53.964 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.964 { 00:23:53.964 "params": { 00:23:53.964 "name": "Nvme$subsystem", 00:23:53.964 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 
00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.965 { 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme$subsystem", 00:23:53.965 "trtype": "$TEST_TRANSPORT", 00:23:53.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "$NVMF_PORT", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.965 "hdgst": ${hdgst:-false}, 00:23:53.965 "ddgst": ${ddgst:-false} 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 } 00:23:53.965 EOF 00:23:53.965 )") 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:53.965 20:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:53.965 "params": { 00:23:53.965 "name": "Nvme1", 00:23:53.965 "trtype": "tcp", 00:23:53.965 "traddr": "10.0.0.2", 00:23:53.965 "adrfam": "ipv4", 00:23:53.965 "trsvcid": "4420", 00:23:53.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.965 "hdgst": false, 00:23:53.965 "ddgst": false 00:23:53.965 }, 00:23:53.965 "method": "bdev_nvme_attach_controller" 00:23:53.965 },{ 00:23:53.965 "params": { 00:23:53.966 "name": "Nvme2", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 },{ 00:23:53.966 "params": { 00:23:53.966 "name": "Nvme3", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 },{ 00:23:53.966 "params": { 00:23:53.966 "name": "Nvme4", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 },{ 00:23:53.966 "params": { 00:23:53.966 "name": "Nvme5", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 },{ 00:23:53.966 "params": { 00:23:53.966 "name": "Nvme6", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 },{ 00:23:53.966 "params": { 00:23:53.966 "name": "Nvme7", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 },{ 00:23:53.966 "params": { 00:23:53.966 "name": "Nvme8", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 
00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 },{ 00:23:53.966 "params": { 00:23:53.966 "name": "Nvme9", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 },{ 00:23:53.966 "params": { 00:23:53.966 "name": "Nvme10", 00:23:53.966 "trtype": "tcp", 00:23:53.966 "traddr": "10.0.0.2", 00:23:53.966 "adrfam": "ipv4", 00:23:53.966 "trsvcid": "4420", 00:23:53.966 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:53.966 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:53.966 "hdgst": false, 00:23:53.966 "ddgst": false 00:23:53.966 }, 00:23:53.966 "method": "bdev_nvme_attach_controller" 00:23:53.966 }' 00:23:53.966 [2024-07-24 20:17:57.655851] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:23:53.966 [2024-07-24 20:17:57.655936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:53.966 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.966 [2024-07-24 20:17:57.732110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.224 [2024-07-24 20:17:57.871495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2101808 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:55.598 20:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:56.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2101808 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2101679 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.532 { 00:23:56.532 "params": { 00:23:56.532 "name": "Nvme$subsystem", 00:23:56.532 "trtype": "$TEST_TRANSPORT", 00:23:56.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.532 "adrfam": "ipv4", 00:23:56.532 "trsvcid": "$NVMF_PORT", 00:23:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.532 "hdgst": ${hdgst:-false}, 00:23:56.532 "ddgst": ${ddgst:-false} 00:23:56.532 }, 00:23:56.532 "method": "bdev_nvme_attach_controller" 00:23:56.532 } 00:23:56.532 EOF 00:23:56.532 )") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.532 { 00:23:56.532 "params": { 00:23:56.532 "name": "Nvme$subsystem", 00:23:56.532 "trtype": "$TEST_TRANSPORT", 00:23:56.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.532 "adrfam": "ipv4", 00:23:56.532 "trsvcid": "$NVMF_PORT", 00:23:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.532 "hdgst": ${hdgst:-false}, 00:23:56.532 "ddgst": ${ddgst:-false} 00:23:56.532 }, 00:23:56.532 "method": "bdev_nvme_attach_controller" 00:23:56.532 } 00:23:56.532 EOF 00:23:56.532 )") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.532 { 00:23:56.532 "params": { 00:23:56.532 "name": "Nvme$subsystem", 00:23:56.532 "trtype": "$TEST_TRANSPORT", 00:23:56.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.532 "adrfam": "ipv4", 00:23:56.532 "trsvcid": "$NVMF_PORT", 00:23:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.532 "hdgst": ${hdgst:-false}, 00:23:56.532 "ddgst": ${ddgst:-false} 00:23:56.532 }, 00:23:56.532 "method": "bdev_nvme_attach_controller" 00:23:56.532 } 00:23:56.532 EOF 00:23:56.532 )") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.532 { 00:23:56.532 "params": { 00:23:56.532 "name": "Nvme$subsystem", 00:23:56.532 "trtype": "$TEST_TRANSPORT", 00:23:56.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.532 "adrfam": "ipv4", 00:23:56.532 "trsvcid": "$NVMF_PORT", 00:23:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.532 "hdgst": ${hdgst:-false}, 00:23:56.532 "ddgst": ${ddgst:-false} 00:23:56.532 }, 00:23:56.532 "method": "bdev_nvme_attach_controller" 00:23:56.532 } 00:23:56.532 EOF 00:23:56.532 )") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.532 { 00:23:56.532 "params": { 00:23:56.532 "name": "Nvme$subsystem", 00:23:56.532 "trtype": "$TEST_TRANSPORT", 00:23:56.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.532 "adrfam": "ipv4", 00:23:56.532 "trsvcid": "$NVMF_PORT", 00:23:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.532 "hdgst": ${hdgst:-false}, 00:23:56.532 "ddgst": ${ddgst:-false} 00:23:56.532 }, 00:23:56.532 "method": "bdev_nvme_attach_controller" 00:23:56.532 } 00:23:56.532 EOF 00:23:56.532 )") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.532 { 00:23:56.532 "params": { 00:23:56.532 "name": "Nvme$subsystem", 00:23:56.532 "trtype": "$TEST_TRANSPORT", 00:23:56.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.532 "adrfam": "ipv4", 00:23:56.532 "trsvcid": "$NVMF_PORT", 00:23:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.532 "hdgst": ${hdgst:-false}, 00:23:56.532 "ddgst": ${ddgst:-false} 00:23:56.532 }, 00:23:56.532 "method": "bdev_nvme_attach_controller" 00:23:56.532 } 00:23:56.532 EOF 00:23:56.532 )") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.532 { 00:23:56.532 "params": { 00:23:56.532 "name": "Nvme$subsystem", 00:23:56.532 "trtype": "$TEST_TRANSPORT", 00:23:56.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.532 "adrfam": "ipv4", 00:23:56.532 "trsvcid": "$NVMF_PORT", 00:23:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.532 "hdgst": ${hdgst:-false}, 00:23:56.532 "ddgst": ${ddgst:-false} 00:23:56.532 }, 00:23:56.532 "method": "bdev_nvme_attach_controller" 00:23:56.532 } 00:23:56.532 EOF 00:23:56.532 )") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.532 20:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.532 { 00:23:56.532 "params": { 00:23:56.532 "name": "Nvme$subsystem", 00:23:56.532 "trtype": "$TEST_TRANSPORT", 00:23:56.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.532 "adrfam": "ipv4", 00:23:56.532 "trsvcid": "$NVMF_PORT", 00:23:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.532 "hdgst": ${hdgst:-false}, 00:23:56.532 "ddgst": ${ddgst:-false} 00:23:56.532 }, 00:23:56.532 "method": "bdev_nvme_attach_controller" 00:23:56.532 } 00:23:56.532 EOF 00:23:56.532 )") 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.532 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.533 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.533 { 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme$subsystem", 00:23:56.533 "trtype": "$TEST_TRANSPORT", 00:23:56.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "$NVMF_PORT", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.533 "hdgst": ${hdgst:-false}, 00:23:56.533 "ddgst": ${ddgst:-false} 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 } 00:23:56.533 EOF 00:23:56.533 )") 00:23:56.533 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.533 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.533 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.533 { 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme$subsystem", 00:23:56.533 "trtype": "$TEST_TRANSPORT", 00:23:56.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "$NVMF_PORT", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.533 "hdgst": ${hdgst:-false}, 00:23:56.533 "ddgst": ${ddgst:-false} 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 } 00:23:56.533 EOF 00:23:56.533 )") 00:23:56.533 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:56.533 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
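For reference, the gen_nvmf_target_json helper being traced here builds one bdev_nvme_attach_controller entry per subsystem in a config=() array (one heredoc fragment each), joins the fragments with IFS="," and printf, and lets jq validate and pretty-print the final document. A condensed sketch reconstructed from this trace, not a verbatim copy of test/nvmf/common.sh; in this run TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420, and the digest flags default to false:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do      # default: a single subsystem "1"
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the fragments with "," and let jq validate/pretty-print the result
    jq . <<JSON
{"subsystems": [{"subsystem": "bdev", "config": [$(IFS=","; printf '%s' "${config[*]}")]}]}
JSON
}
# usage, mirroring the harness: hand the generated config to bdevperf via process
# substitution (the trace shows it arriving as --json /dev/fd/62):
# ./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1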
00:23:56.533 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:56.533 20:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme1", 00:23:56.533 "trtype": "tcp", 00:23:56.533 "traddr": "10.0.0.2", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "4420", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.533 "hdgst": false, 00:23:56.533 "ddgst": false 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 },{ 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme2", 00:23:56.533 "trtype": "tcp", 00:23:56.533 "traddr": "10.0.0.2", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "4420", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:56.533 "hdgst": false, 00:23:56.533 "ddgst": false 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 },{ 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme3", 00:23:56.533 "trtype": "tcp", 00:23:56.533 "traddr": "10.0.0.2", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "4420", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:56.533 "hdgst": false, 00:23:56.533 "ddgst": false 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 },{ 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme4", 00:23:56.533 "trtype": "tcp", 00:23:56.533 "traddr": "10.0.0.2", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "4420", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:56.533 "hdgst": false, 00:23:56.533 "ddgst": false 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 },{ 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme5", 00:23:56.533 "trtype": "tcp", 00:23:56.533 "traddr": "10.0.0.2", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "4420", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:56.533 "hdgst": false, 00:23:56.533 "ddgst": false 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 },{ 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme6", 00:23:56.533 "trtype": "tcp", 00:23:56.533 "traddr": "10.0.0.2", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "4420", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:56.533 "hdgst": false, 00:23:56.533 "ddgst": false 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 },{ 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme7", 00:23:56.533 "trtype": "tcp", 00:23:56.533 "traddr": "10.0.0.2", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "4420", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:56.533 "hdgst": false, 00:23:56.533 "ddgst": false 00:23:56.533 }, 00:23:56.533 "method": "bdev_nvme_attach_controller" 00:23:56.533 },{ 00:23:56.533 "params": { 00:23:56.533 "name": "Nvme8", 00:23:56.533 "trtype": "tcp", 00:23:56.533 "traddr": "10.0.0.2", 00:23:56.533 "adrfam": "ipv4", 00:23:56.533 "trsvcid": "4420", 00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:56.533 "hdgst": false,
00:23:56.533 "ddgst": false
00:23:56.533 },
00:23:56.533 "method": "bdev_nvme_attach_controller"
00:23:56.533 },{
00:23:56.533 "params": {
00:23:56.533 "name": "Nvme9",
00:23:56.533 "trtype": "tcp",
00:23:56.533 "traddr": "10.0.0.2",
00:23:56.533 "adrfam": "ipv4",
00:23:56.533 "trsvcid": "4420",
00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:23:56.533 "hdgst": false,
00:23:56.533 "ddgst": false
00:23:56.533 },
00:23:56.533 "method": "bdev_nvme_attach_controller"
00:23:56.533 },{
00:23:56.533 "params": {
00:23:56.533 "name": "Nvme10",
00:23:56.533 "trtype": "tcp",
00:23:56.533 "traddr": "10.0.0.2",
00:23:56.533 "adrfam": "ipv4",
00:23:56.533 "trsvcid": "4420",
00:23:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:23:56.533 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:23:56.533 "hdgst": false,
00:23:56.533 "ddgst": false
00:23:56.533 },
00:23:56.533 "method": "bdev_nvme_attach_controller"
00:23:56.533 }'
00:23:56.533 [2024-07-24 20:18:00.284246] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:23:56.533 [2024-07-24 20:18:00.284345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102204 ]
00:23:56.791 EAL: No free 2048 kB hugepages reported on node 1
00:23:56.791 [2024-07-24 20:18:00.371053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:56.791 [2024-07-24 20:18:00.511735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:58.690 Running I/O for 1 seconds...
00:23:59.624
00:23:59.624 Latency(us)
00:23:59.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:59.624 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme1n1 : 1.10 174.80 10.92 0.00 0.00 360070.26 25437.68 332437.43
00:23:59.624 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme2n1 : 1.18 162.62 10.16 0.00 0.00 379556.60 28738.75 351078.78
00:23:59.624 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme3n1 : 1.09 180.48 11.28 0.00 0.00 331208.44 6796.33 343311.55
00:23:59.624 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme4n1 : 1.09 175.57 10.97 0.00 0.00 334713.49 42137.22 316902.97
00:23:59.624 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme5n1 : 1.19 165.54 10.35 0.00 0.00 348557.69 3203.98 347971.89
00:23:59.624 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme6n1 : 1.14 168.92 10.56 0.00 0.00 332156.02 23787.14 313796.08
00:23:59.624 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme7n1 : 1.27 202.16 12.63 0.00 0.00 275370.86 17864.63 343311.55
00:23:59.624 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme8n1 : 1.27 203.94 12.75 0.00 0.00 267116.05 2791.35 316902.97
00:23:59.624 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme9n1 : 1.28 199.24 12.45 0.00 0.00 267944.01 17282.09 363506.35
00:23:59.624 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:59.624 Verification LBA range: start 0x0 length 0x400
00:23:59.624 Nvme10n1 : 1.29 198.28 12.39 0.00 0.00 263385.69 17282.09 385254.59
00:23:59.624 ===================================================================================================================
00:23:59.624 Total : 1831.54 114.47 0.00 0.00 310459.36 2791.35 385254.59
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:59.882 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2101679 ']'
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2101679
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2101679 ']'
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2101679
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101679 00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101679' 00:24:00.140 killing process with pid 2101679 00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2101679 00:24:00.140 20:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2101679 00:24:00.706 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.706 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.706 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.706 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.706 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.706 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.706 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.706 20:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.239 00:24:03.239 real 0m12.576s 00:24:03.239 user 0m34.840s 00:24:03.239 sys 0m3.781s 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.239 ************************************ 00:24:03.239 END TEST nvmf_shutdown_tc1 00:24:03.239 ************************************ 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:03.239 ************************************ 00:24:03.239 START TEST nvmf_shutdown_tc2 00:24:03.239 ************************************ 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:03.239 20:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.239 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:03.240 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:03.240 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:03.240 Found net devices under 0000:84:00.0: cvl_0_0 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.240 20:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:03.240 Found net devices under 0000:84:00.1: cvl_0_1 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.240 20:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:03.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:03.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms
00:24:03.240
00:24:03.240 --- 10.0.0.2 ping statistics ---
00:24:03.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:03.240 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:03.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:03.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms
00:24:03.240
00:24:03.240 --- 10.0.0.1 ping statistics ---
00:24:03.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:03.240 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
00:24:03.240 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2103057 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2103057 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2103057 ']' 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.241 20:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.241 [2024-07-24 20:18:06.768657] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:24:03.241 [2024-07-24 20:18:06.768761] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.241 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.241 [2024-07-24 20:18:06.858399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.241 [2024-07-24 20:18:07.001506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.241 [2024-07-24 20:18:07.001571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.241 [2024-07-24 20:18:07.001590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.241 [2024-07-24 20:18:07.001606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.241 [2024-07-24 20:18:07.001621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
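Note: the waitforlisten step traced above boils down to polling the target's RPC Unix socket until the freshly launched nvmf_tgt answers. A minimal sketch of that pattern in bash, assuming SPDK's scripts/rpc.py is available; wait_for_rpc is a stand-in name, not the exact helper from autotest_common.sh:

    # Poll the RPC socket until the app responds or the retry budget runs out.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2>/dev/null || return 1    # give up if the target died
            # any successful RPC round-trip means the socket is up and serving
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }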
00:24:03.241 [2024-07-24 20:18:07.001727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.241 [2024-07-24 20:18:07.001788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.241 [2024-07-24 20:18:07.001862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:03.241 [2024-07-24 20:18:07.001865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.500 [2024-07-24 20:18:07.170497] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.500 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.500 Malloc1 00:24:03.500 [2024-07-24 20:18:07.250405] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.500 Malloc2 00:24:03.760 Malloc3 00:24:03.760 Malloc4 00:24:03.760 Malloc5 00:24:03.760 Malloc6 00:24:03.760 Malloc7 00:24:04.037 Malloc8 00:24:04.037 Malloc9 00:24:04.037 Malloc10 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2103335 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2103335 /var/tmp/bdevperf.sock 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2103335 ']' 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.037 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": 
"Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.038 { 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme$subsystem", 00:24:04.038 "trtype": "$TEST_TRANSPORT", 00:24:04.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.038 "adrfam": "ipv4", 00:24:04.038 "trsvcid": "$NVMF_PORT", 00:24:04.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.038 "hdgst": ${hdgst:-false}, 00:24:04.038 "ddgst": ${ddgst:-false} 00:24:04.038 }, 00:24:04.038 "method": "bdev_nvme_attach_controller" 00:24:04.038 } 00:24:04.038 EOF 00:24:04.038 )") 00:24:04.038 20:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:04.038 20:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:04.038 "params": { 00:24:04.038 "name": "Nvme1", 00:24:04.038 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme2", 00:24:04.039 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme3", 00:24:04.039 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme4", 00:24:04.039 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme5", 00:24:04.039 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme6", 00:24:04.039 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme7", 00:24:04.039 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme8", 00:24:04.039 "trtype": "tcp", 
00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme9", 00:24:04.039 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 },{ 00:24:04.039 "params": { 00:24:04.039 "name": "Nvme10", 00:24:04.039 "trtype": "tcp", 00:24:04.039 "traddr": "10.0.0.2", 00:24:04.039 "adrfam": "ipv4", 00:24:04.039 "trsvcid": "4420", 00:24:04.039 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:04.039 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:04.039 "hdgst": false, 00:24:04.039 "ddgst": false 00:24:04.039 }, 00:24:04.039 "method": "bdev_nvme_attach_controller" 00:24:04.039 }' 00:24:04.039 [2024-07-24 20:18:07.791229] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:24:04.039 [2024-07-24 20:18:07.791316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103335 ] 00:24:04.303 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.303 [2024-07-24 20:18:07.867329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.303 [2024-07-24 20:18:08.010049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.220 Running I/O for 10 seconds... 
00:24:06.220 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:06.220 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:06.220 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:06.220 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:06.478 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.737 20:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=72 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:24:06.737 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=136 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2103335 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2103335 ']' 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2103335 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2103335 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2103335' 00:24:06.995 killing process with pid 2103335 00:24:06.995 20:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2103335 00:24:06.995 20:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2103335
00:24:07.254 Received shutdown signal, test time was about 1.057284 seconds
00:24:07.254
00:24:07.254 Latency(us)
00:24:07.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:07.254 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme1n1 : 1.05 193.17 12.07 0.00 0.00 324172.97 7427.41 388361.48
00:24:07.254 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme2n1 : 1.00 127.81 7.99 0.00 0.00 481327.03 61749.48 344865.00
00:24:07.254 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme3n1 : 1.05 182.37 11.40 0.00 0.00 329455.94 31457.28 416323.51
00:24:07.254 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme4n1 : 1.04 188.02 11.75 0.00 0.00 309410.66 5606.97 403895.94
00:24:07.254 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme5n1 : 1.02 125.64 7.85 0.00 0.00 452963.75 42525.58 419430.40
00:24:07.254 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme6n1 : 1.06 181.80 11.36 0.00 0.00 305778.54 25826.04 403895.94
00:24:07.254 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme7n1 : 1.04 191.17 11.95 0.00 0.00 280485.66 8738.13 380594.25
00:24:07.254 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme8n1 : 1.01 127.32 7.96 0.00 0.00 409458.92 28350.39 368166.68
00:24:07.254 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme9n1 : 0.99 128.81 8.05 0.00 0.00 391789.23 34369.99 393021.82
00:24:07.254 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.254 Verification LBA range: start 0x0 length 0x400
00:24:07.254 Nvme10n1 : 1.02 125.17 7.82 0.00 0.00 393939.63 31263.10 450499.32
00:24:07.254 ===================================================================================================================
00:24:07.254 Total : 1571.28 98.21 0.00 0.00 355633.78 5606.97 450499.32
00:24:07.513 20:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2103057 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.446 rmmod nvme_tcp 00:24:08.446 rmmod nvme_fabrics 00:24:08.446 rmmod nvme_keyring 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2103057 ']' 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2103057 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2103057 ']' 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2103057 00:24:08.446 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:24:08.447 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.447 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2103057 00:24:08.447 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:08.447 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:08.447 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2103057' 00:24:08.447 killing process with pid 2103057 00:24:08.447 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2103057 00:24:08.447 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2103057 00:24:09.380 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:09.380 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
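The killprocess calls traced in this stretch follow a guard-then-signal pattern. A simplified reconstruction of the steps the trace shows; the real helper in autotest_common.sh does a little more bookkeeping:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                      # the '[' -z ... ']' guard
        kill -0 "$pid" || return 1                     # is it still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1     # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                # reap it if it is our child
    }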
00:24:09.380 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:09.380 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.380 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:09.380 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.380 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.380 20:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:11.280
00:24:11.280 real 0m8.366s
00:24:11.280 user 0m26.146s
00:24:11.280 sys 0m1.574s
00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:11.280 ************************************
00:24:11.280 END TEST nvmf_shutdown_tc2
00:24:11.280 ************************************
00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:11.280 ************************************
00:24:11.280 START TEST nvmf_shutdown_tc3
00:24:11.280 ************************************
00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
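Before tc3 rebuilds the topology, _remove_spdk_ns tears down whatever the previous case left behind. A rough equivalent, assuming the helper simply deletes the *_ns_spdk namespaces; the actual implementation in nvmf/common.sh is not shown in this trace:

    # Drop every namespace this suite created; deleting the namespace returns
    # cvl_0_0 to the root namespace, and the flush clears the test address.
    _remove_spdk_ns() {
        local ns
        for ns in $(ip netns list | awk '{print $1}' | grep _ns_spdk); do
            ip netns delete "$ns"
        done
    }
    _remove_spdk_ns
    ip -4 addr flush cvl_0_1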
00:24:11.280 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:11.281 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:11.281 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:11.281 20:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:11.281 Found net devices under 0000:84:00.0: cvl_0_0 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:11.281 Found net devices under 0000:84:00.1: cvl_0_1 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.281 20:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.281 20:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.281 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.281 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.281 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:11.281 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:11.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:24:11.540 00:24:11.540 --- 10.0.0.2 ping statistics --- 00:24:11.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.540 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:24:11.540 00:24:11.540 --- 10.0.0.1 ping statistics --- 00:24:11.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.540 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2104789 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2104789 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2104789 ']' 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
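The nvmf_tcp_init sequence above turns one dual-port host into a target/initiator pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, port 4420 is opened for NVMe/TCP, and the two pings prove reachability in both directions. The same setup, condensed into a standalone sketch with the interface names from this run (any two cabled ports work):

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0    # port that will carry the NVMe-oF target
INI_IF=cvl_0_1    # port left in the default namespace for the initiator

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                 # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator

Every target-side command afterwards is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt launch line above carries that prefix.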
00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.540 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.540 [2024-07-24 20:18:15.259415] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:24:11.540 [2024-07-24 20:18:15.259578] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.799 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.799 [2024-07-24 20:18:15.382554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.799 [2024-07-24 20:18:15.525489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.799 [2024-07-24 20:18:15.525566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.799 [2024-07-24 20:18:15.525599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.799 [2024-07-24 20:18:15.525626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.799 [2024-07-24 20:18:15.525649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.799 [2024-07-24 20:18:15.525772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.799 [2024-07-24 20:18:15.525842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.799 [2024-07-24 20:18:15.525897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:11.799 [2024-07-24 20:18:15.525912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.057 [2024-07-24 20:18:15.731624] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10})
00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:24:12.057 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
[the for/cat pair above repeats identically for each of the ten subsystems]
00:24:12.058 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd
00:24:12.058 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:12.058 20:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:24:12.058 Malloc1 00:24:12.058 [2024-07-24 20:18:15.811496] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.058 Malloc2 00:24:12.315 Malloc3 00:24:12.315 Malloc4 00:24:12.315 Malloc5 00:24:12.315 Malloc6 00:24:12.315 Malloc7 00:24:12.574 Malloc8 00:24:12.574 Malloc9 00:24:12.574 Malloc10 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2104968 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2104968 /var/tmp/bdevperf.sock 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2104968 ']' 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
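The create_subsystems phase above appends one block of RPC lines per subsystem to rpcs.txt (the repeated for/cat entries) and replays the whole file through a single rpc_cmd call, which is why Malloc1 through Malloc10 appear in one burst. A rough equivalent is sketched below; the rpc.py method names are the standard ones, but the Malloc sizes, serial numbers, and file path are illustrative, and the TCP transport is assumed to exist already (nvmf_create_transport -t tcp -o -u 8192 was issued earlier in this trace).

RPCS=/tmp/rpcs.txt    # the test itself uses .../spdk/test/nvmf/target/rpcs.txt
: > "$RPCS"
for i in {1..10}; do
    cat >> "$RPCS" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# one rpc.py process per line; SPDK's rpc_cmd instead keeps a single
# long-lived "rpc.py --server" coprocess and streams the same lines to it
while read -r line; do
    scripts/rpc.py -s /var/tmp/spdk.sock $line
done < "$RPCS"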
00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.574 { 00:24:12.574 "params": { 00:24:12.574 "name": "Nvme$subsystem", 00:24:12.574 "trtype": "$TEST_TRANSPORT", 00:24:12.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.574 "adrfam": "ipv4", 00:24:12.574 "trsvcid": "$NVMF_PORT", 00:24:12.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.574 "hdgst": ${hdgst:-false}, 00:24:12.574 "ddgst": ${ddgst:-false} 00:24:12.574 }, 00:24:12.574 "method": "bdev_nvme_attach_controller" 00:24:12.574 } 00:24:12.574 EOF 00:24:12.574 )") 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.574 { 00:24:12.574 "params": { 00:24:12.574 "name": "Nvme$subsystem", 00:24:12.574 "trtype": "$TEST_TRANSPORT", 00:24:12.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.574 "adrfam": "ipv4", 00:24:12.574 "trsvcid": "$NVMF_PORT", 00:24:12.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.574 "hdgst": ${hdgst:-false}, 00:24:12.574 "ddgst": ${ddgst:-false} 00:24:12.574 }, 00:24:12.574 "method": "bdev_nvme_attach_controller" 00:24:12.574 } 00:24:12.574 EOF 00:24:12.574 )") 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.574 { 00:24:12.574 "params": { 00:24:12.574 "name": "Nvme$subsystem", 00:24:12.574 "trtype": "$TEST_TRANSPORT", 00:24:12.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.574 "adrfam": "ipv4", 00:24:12.574 "trsvcid": "$NVMF_PORT", 00:24:12.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.574 "hdgst": ${hdgst:-false}, 00:24:12.574 "ddgst": ${ddgst:-false} 00:24:12.574 }, 00:24:12.574 "method": "bdev_nvme_attach_controller" 00:24:12.574 } 00:24:12.574 EOF 00:24:12.574 )") 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.574 { 00:24:12.574 "params": { 00:24:12.574 "name": "Nvme$subsystem", 00:24:12.574 
"trtype": "$TEST_TRANSPORT", 00:24:12.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.574 "adrfam": "ipv4", 00:24:12.574 "trsvcid": "$NVMF_PORT", 00:24:12.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.574 "hdgst": ${hdgst:-false}, 00:24:12.574 "ddgst": ${ddgst:-false} 00:24:12.574 }, 00:24:12.574 "method": "bdev_nvme_attach_controller" 00:24:12.574 } 00:24:12.574 EOF 00:24:12.574 )") 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.574 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.574 { 00:24:12.574 "params": { 00:24:12.574 "name": "Nvme$subsystem", 00:24:12.574 "trtype": "$TEST_TRANSPORT", 00:24:12.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.574 "adrfam": "ipv4", 00:24:12.574 "trsvcid": "$NVMF_PORT", 00:24:12.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.574 "hdgst": ${hdgst:-false}, 00:24:12.574 "ddgst": ${ddgst:-false} 00:24:12.574 }, 00:24:12.574 "method": "bdev_nvme_attach_controller" 00:24:12.574 } 00:24:12.575 EOF 00:24:12.575 )") 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.575 { 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme$subsystem", 00:24:12.575 "trtype": "$TEST_TRANSPORT", 00:24:12.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "$NVMF_PORT", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.575 "hdgst": ${hdgst:-false}, 00:24:12.575 "ddgst": ${ddgst:-false} 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 } 00:24:12.575 EOF 00:24:12.575 )") 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.575 { 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme$subsystem", 00:24:12.575 "trtype": "$TEST_TRANSPORT", 00:24:12.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "$NVMF_PORT", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.575 "hdgst": ${hdgst:-false}, 00:24:12.575 "ddgst": ${ddgst:-false} 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 } 00:24:12.575 EOF 00:24:12.575 )") 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.575 20:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.575 { 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme$subsystem", 00:24:12.575 "trtype": "$TEST_TRANSPORT", 00:24:12.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "$NVMF_PORT", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.575 "hdgst": ${hdgst:-false}, 00:24:12.575 "ddgst": ${ddgst:-false} 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 } 00:24:12.575 EOF 00:24:12.575 )") 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.575 { 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme$subsystem", 00:24:12.575 "trtype": "$TEST_TRANSPORT", 00:24:12.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "$NVMF_PORT", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.575 "hdgst": ${hdgst:-false}, 00:24:12.575 "ddgst": ${ddgst:-false} 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 } 00:24:12.575 EOF 00:24:12.575 )") 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.575 { 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme$subsystem", 00:24:12.575 "trtype": "$TEST_TRANSPORT", 00:24:12.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "$NVMF_PORT", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.575 "hdgst": ${hdgst:-false}, 00:24:12.575 "ddgst": ${ddgst:-false} 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 } 00:24:12.575 EOF 00:24:12.575 )") 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
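gen_nvmf_target_json, whose trace ends with the jq validation just above, expands that heredoc template once per subsystem and comma-joins the stanzas into bdevperf's --json config. A trimmed sketch of the technique, with literal tcp/10.0.0.2/4420 standing in for $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT; only the joined array is produced here, whereas the real helper wraps it in a complete bdev subsystem document:

gen_attach_configs() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one JSON object per controller; unset hdgst/ddgst default to false
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<<"[ ${config[*]} ]"    # comma-join, then let jq reject malformed JSON
}

gen_attach_configs 1 2 3    # three stanzas; the test passes 1..10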
00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:12.575 20:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme1", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme2", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme3", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme4", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme5", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme6", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme7", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme8", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme9", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.575 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:12.575 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:12.575 "hdgst": false, 00:24:12.575 "ddgst": false 00:24:12.575 }, 00:24:12.575 "method": "bdev_nvme_attach_controller" 00:24:12.575 },{ 00:24:12.575 "params": { 00:24:12.575 "name": "Nvme10", 00:24:12.575 "trtype": "tcp", 00:24:12.575 "traddr": "10.0.0.2", 00:24:12.575 "adrfam": "ipv4", 00:24:12.575 "trsvcid": "4420", 00:24:12.576 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:12.576 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:12.576 "hdgst": false, 00:24:12.576 "ddgst": false 00:24:12.576 }, 00:24:12.576 "method": "bdev_nvme_attach_controller" 00:24:12.576 }' 00:24:12.576 [2024-07-24 20:18:16.339095] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:24:12.576 [2024-07-24 20:18:16.339173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104968 ] 00:24:12.834 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.834 [2024-07-24 20:18:16.414202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.834 [2024-07-24 20:18:16.552736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.734 Running I/O for 10 seconds... 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:14.734 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:14.992 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.260 20:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2104789
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2104789 ']'
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2104789
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2104789
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2104789'
killing process with pid 2104789
00:24:15.260 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2104789
00:24:15.261 20:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2104789
00:24:15.261 [2024-07-24 20:18:18.985800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x585dc0 is same with the state(5) to be set
[the tcp.c:1653 line above repeats for tqpair=0x585dc0 with timestamps 20:18:18.985884 through 20:18:18.986941]
00:24:15.261 [2024-07-24 20:18:18.989706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set
[the tcp.c:1653 line above repeats for tqpair=0x587f20 from 20:18:18.989752 through 20:18:18.990277, where this excerpt is cut off]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 
00:24:15.262 [2024-07-24 20:18:18.990655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.990785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587f20 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.993990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is 
same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.262 [2024-07-24 20:18:18.994506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994815] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.994923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586280 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 00:24:15.263 [2024-07-24 20:18:18.998930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586740 is same with the state(5) to be set 
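For anyone triaging this failure mode: the tcp.c:1653 flood above is the target-side TCP transport refusing to re-enter a receive state the qpair is already in, once per socket event on an already-broken connection. The sketch below is a minimal, self-contained paraphrase of that guard, not the SPDK source itself; the real code is nvmf_tcp_qpair_set_recv_state() in SPDK's lib/nvmf/tcp.c, and the enum names and exact numbering here are an assumption based on recent SPDK trees, where state 5 would be NVME_TCP_PDU_RECV_STATE_ERROR.

```c
/* Minimal sketch (not SPDK source) of the guard behind the repeated
 * "recv state ... is same with the state(5) to be set" error lines. */
#include <stdio.h>

/* Mirrors SPDK's enum nvme_tcp_pdu_recv_state (assumed ordering). */
enum pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_AWAIT_PDU_CH,
	RECV_STATE_AWAIT_PDU_PSH,
	RECV_STATE_AWAIT_PDU_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR,	/* == 5, the "state(5)" in the log */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

static void
set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Every repeated log line above comes from a branch like this. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int
main(void)
{
	struct tcp_qpair tqpair = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

	set_recv_state(&tqpair, RECV_STATE_ERROR); /* real transition: silent    */
	set_recv_state(&tqpair, RECV_STATE_ERROR); /* re-entry: one error line   */
	set_recv_state(&tqpair, RECV_STATE_ERROR); /* ...and one more, as above  */
	return 0;
}
```

On its own the message is noise rather than the failure: the roughly 63 copies per tqpair are consistent with the poll group continuing to service socket events while each connection is torn down.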
00:24:15.263 [2024-07-24 20:18:18.999223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.263 [2024-07-24 20:18:18.999276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.263 [2024-07-24 20:18:18.999311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.263 [2024-07-24 20:18:18.999330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.263 [2024-07-24 20:18:18.999350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.263 [2024-07-24 20:18:18.999369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.263 [2024-07-24 20:18:18.999391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.263 [2024-07-24 20:18:18.999412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.263 [2024-07-24 20:18:18.999451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216fc00 is same with the state(5) to be set
00:24:15.264 [2024-07-24 20:18:18.999599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:18.999629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:18.999650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:18.999671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:18.999693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:18.999714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:18.999734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:18.999753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:18.999793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01200 is same with the state(5) to be set
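The (00/08) in the completions above is NVMe status in SPDK's (SCT/SC) notation: status code type 0x0, Generic Command Status, with status code 0x08, "Command Aborted due to SQ Deletion". In other words, the four permanently outstanding ASYNC EVENT REQUESTs (qid:0, cid 0 through 3) on each controller's admin queue are being completed as aborted while the queues are deleted during teardown. A small decoding sketch follows; the bitfield layout mirrors spdk_nvme_status from include/spdk/nvme_spec.h as I understand it, so treat it as an assumption rather than a byte-exact copy:

```c
/* Sketch: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion. */
#include <stdint.h>
#include <stdio.h>

/* Assumed layout of the 16-bit NVMe completion status field. */
struct nvme_status {
	uint16_t p   : 1; /* phase tag */
	uint16_t sc  : 8; /* status code */
	uint16_t sct : 3; /* status code type */
	uint16_t crd : 2; /* command retry delay */
	uint16_t m   : 1; /* more */
	uint16_t dnr : 1; /* do not retry */
};

static const char *
status_string(const struct nvme_status *s)
{
	/* SCT 0x0 = Generic Command Status; SC 0x08 there = aborted, SQ deleted. */
	if (s->sct == 0x0 && s->sc == 0x08) {
		return "ABORTED - SQ DELETION";
	}
	return "(other status)";
}

int
main(void)
{
	struct nvme_status s = { .sct = 0x0, .sc = 0x08, .dnr = 0 };

	/* Prints: ABORTED - SQ DELETION (00/08), matching the log lines. */
	printf("%s (%02x/%02x)\n", status_string(&s),
	       (unsigned)s.sct, (unsigned)s.sc);
	return 0;
}
```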
00:24:15.264 [2024-07-24 20:18:18.999855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:18.999882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:18.999901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:18.999919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:18.999937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:18.999954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:18.999972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:18.999989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:19.000006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213fec0 is same with the state(5) to be set
00:24:15.264 [2024-07-24 20:18:19.000083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:19.000112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:19.000133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:19.000151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:19.000169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:19.000187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:19.000207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.264 [2024-07-24 20:18:19.000225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.264 [2024-07-24 20:18:19.000242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22df950 is same with the state(5) to be set
00:24:15.264 [2024-07-24 20:18:19.002601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x586c20 is same with the state(5) to be set
... (last message repeated ~62 more times for tqpair=0x586c20, through 20:18:19.004536)
00:24:15.265 [2024-07-24 20:18:19.005836] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:15.265 [2024-07-24 20:18:19.005943] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:15.265 [2024-07-24 20:18:19.007302] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:15.265 [2024-07-24 20:18:19.009051] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
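The host-side "Unexpected PDU type 0x00" errors above mark the moment the initiator reads bytes that no longer parse as NVMe/TCP: an all-zero or truncated common header decodes to PDU type 0x00 (ICReq), which is a host-to-controller-only PDU and should never arrive at the host. The "Failed to flush tqpair=... (9): Bad file descriptor" lines that follow are the same connections after their sockets were closed; the 9 is errno EBADF. A simplified sketch of the header check, with PDU type values as I recall them from the NVMe/TCP spec and a handler shape that only approximates SPDK's nvme_tcp_pdu_ch_handle:

```c
/* Sketch of the check behind "Unexpected PDU type 0x00" (nvme_tcp.c:1241). */
#include <stdint.h>
#include <stdio.h>

/* NVMe/TCP PDU types a host may receive (controller-to-host direction). */
enum nvme_tcp_pdu_type {
	NVME_TCP_PDU_TYPE_IC_REQ       = 0x00, /* host->ctrl only */
	NVME_TCP_PDU_TYPE_IC_RESP      = 0x01,
	NVME_TCP_PDU_TYPE_C2H_TERM_REQ = 0x03,
	NVME_TCP_PDU_TYPE_CAPSULE_RESP = 0x05,
	NVME_TCP_PDU_TYPE_C2H_DATA     = 0x07,
	NVME_TCP_PDU_TYPE_R2T          = 0x09,
};

/* NVMe/TCP common PDU header (8 bytes). */
struct nvme_tcp_common_pdu_hdr {
	uint8_t  pdu_type;
	uint8_t  flags;
	uint8_t  hlen;
	uint8_t  pdo;
	uint32_t plen;
};

/* Returns 0 if the header names a PDU the host may receive, -1 otherwise. */
static int
pdu_ch_handle(const struct nvme_tcp_common_pdu_hdr *ch)
{
	switch (ch->pdu_type) {
	case NVME_TCP_PDU_TYPE_IC_RESP:
	case NVME_TCP_PDU_TYPE_C2H_TERM_REQ:
	case NVME_TCP_PDU_TYPE_CAPSULE_RESP:
	case NVME_TCP_PDU_TYPE_C2H_DATA:
	case NVME_TCP_PDU_TYPE_R2T:
		return 0;
	default:
		fprintf(stderr, "Unexpected PDU type 0x%02x\n",
			(unsigned)ch->pdu_type);
		return -1;
	}
}

int
main(void)
{
	struct nvme_tcp_common_pdu_hdr zeroed = {0}; /* what a dead socket yields */

	return pdu_ch_handle(&zeroed) == -1 ? 0 : 1;
}
```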
00:24:15.265 [2024-07-24 20:18:19.009549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216fc00 (9): Bad file descriptor
00:24:15.265 [2024-07-24 20:18:19.009707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.265 [2024-07-24 20:18:19.009740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.265 [2024-07-24 20:18:19.009771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.265 [2024-07-24 20:18:19.009790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.265 [2024-07-24 20:18:19.009809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.265 [2024-07-24 20:18:19.009827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.265 [2024-07-24 20:18:19.009845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:15.265 [2024-07-24 20:18:19.009864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.265 [2024-07-24 20:18:19.009882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21431c0 is same with the state(5) to be set
00:24:15.265 [2024-07-24 20:18:19.009920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d01200 (9): Bad file descriptor
00:24:15.265 [2024-07-24 20:18:19.009961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213fec0 (9): Bad file descriptor
00:24:15.265 [2024-07-24 20:18:19.010001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22df950 (9): Bad file descriptor
00:24:15.265 [2024-07-24 20:18:19.010162] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:15.265 [2024-07-24 20:18:19.011503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.265 [2024-07-24 20:18:19.011538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.265 [2024-07-24 20:18:19.011573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.265 [2024-07-24 20:18:19.011595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (the READ/ABORTED pair above repeats for sqid:1 cid:1 through cid:41, lba 16512 through 21632 in steps of 128, 20:18:19.011619 through 20:18:19.013269; every completion is ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:24:15.266 [2024-07-24 20:18:19.013290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.266 [2024-07-24 20:18:19.013308] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.266 [2024-07-24 20:18:19.013328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.266 [2024-07-24 20:18:19.013347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.266 [2024-07-24 20:18:19.013368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.266 [2024-07-24 20:18:19.013386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.266 [2024-07-24 20:18:19.013407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.266 [2024-07-24 20:18:19.013426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.266 [2024-07-24 20:18:19.013461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.266 [2024-07-24 20:18:19.013481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.266 [2024-07-24 20:18:19.013502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.266 [2024-07-24 20:18:19.013527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.266 [2024-07-24 20:18:19.013549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.266 [2024-07-24 20:18:19.013569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.266 [2024-07-24 20:18:19.013590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.266 [2024-07-24 20:18:19.013609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.266 [2024-07-24 20:18:19.013630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.013976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.013997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.014016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.014036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.014055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.014076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.014095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.014116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.267 [2024-07-24 20:18:19.014135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.267 [2024-07-24 20:18:19.014154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c3820 is same with the state(5) to be set 00:24:15.267 [2024-07-24 20:18:19.014256] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c3820 was disconnected and freed. reset controller. 00:24:15.267 [2024-07-24 20:18:19.017037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:15.267 [2024-07-24 20:18:19.017135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142cb0 (9): Bad file descriptor 00:24:15.267 [2024-07-24 20:18:19.018336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.267 [2024-07-24 20:18:19.018378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2142cb0 with addr=10.0.0.2, port=4420 00:24:15.267 [2024-07-24 20:18:19.018403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2142cb0 is same with the state(5) to be set 00:24:15.267 [2024-07-24 20:18:19.018738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142cb0 (9): Bad file descriptor 00:24:15.267 [2024-07-24 20:18:19.019061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:15.267 [2024-07-24 20:18:19.019091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:15.267 [2024-07-24 20:18:19.019114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:15.267 [2024-07-24 20:18:19.019445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
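Each "(00/08)" in the completion notices above is the (status code type/status code) pair printed by spdk_nvme_print_completion: SCT 0x0 is the NVMe Generic Command Status set, and SC 0x08 is Command Aborted due to SQ Deletion, the expected status for I/O still queued when a submission queue is torn down during a controller reset. The connect() failed, errno = 111 that follows is ECONNREFUSED, i.e. the target at 10.0.0.2:4420 was not accepting connections at that moment, so the reconnect poll failed and the controller was left in a failed state. A minimal, hypothetical decoder for such lines (not part of the test suite; it assumes the exact spdk_nvme_print_completion message layout shown above):

```python
#!/usr/bin/env python3
"""Sketch: decode the (sct/sc) status pair in spdk_nvme_print_completion
log lines like the ones above. Hypothetical helper, not SPDK code."""
import re

# Subset of NVMe Generic Command Status (SCT 0x0) codes seen in such logs.
GENERIC_SC = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",  # the status flooding this log
}

# Assumes the "(xx/yy) qid:N" layout printed by spdk_nvme_print_completion.
COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: .+? "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)"
)

def decode(line):
    """Return (qid, sct, sc, meaning) for a completion notice, else None."""
    m = COMPLETION.search(line)
    if m is None:
        return None
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    meaning = (GENERIC_SC.get(sc, f"generic sc 0x{sc:02x}") if sct == 0
               else f"sct 0x{sct:02x} / sc 0x{sc:02x}")
    return int(m["qid"]), sct, sc, meaning

if __name__ == "__main__":
    sample = ("[2024-07-24 20:18:19.011688] nvme_qpair.c: 474:"
              "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION "
              "(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
    print(decode(sample))  # -> (1, 0, 8, 'ABORTED - SQ DELETION')
```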
00:24:15.267 [2024-07-24 20:18:19.019822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21431c0 (9): Bad file descriptor
00:24:15.267 [2024-07-24 20:18:19.020246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the matching completion, 7 further WRITE commands (cid:57..63, lba:23680..24448, len:128) and 56 READ commands (cid:0..55, lba:16384..23424, len:128), each followed by the same ABORTED - SQ DELETION (00/08) completion notice, omitted ...]
00:24:15.269 [2024-07-24 20:18:19.022915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221db60 is same with the state(5) to be set
00:24:15.269 [2024-07-24 20:18:19.024612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the matching completion plus READ commands cid:1..44 (lba:16512..22016, len:128) and their ABORTED - SQ DELETION (00/08) completions, omitted ...]
00:24:15.271 [2024-07-24 20:18:19.026500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.271 [2024-07-24 20:18:19.026519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271 [2024-07-24 20:18:19.026540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271 [2024-07-24 20:18:19.026565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271 [2024-07-24 20:18:19.026587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271 [2024-07-24 20:18:19.026606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271 [2024-07-24 20:18:19.026626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271 [2024-07-24 20:18:19.026645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271 [2024-07-24 20:18:19.026666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271 [2024-07-24 20:18:19.026686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271 [2024-07-24 20:18:19.026662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271 [2024-07-24 20:18:19.026707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271 [2024-07-24 20:18:19.026719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271 [2024-07-24 20:18:19.026726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271 [2024-07-24 20:18:19.026739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271 [2024-07-24 20:18:19.026748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271 [2024-07-24 20:18:19.026757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271 [2024-07-24 20:18:19.026768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271 [2024-07-24 20:18:19.026774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271 [2024-07-24 20:18:19.026790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271 [2024-07-24 20:18:19.026793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271 [2024-07-24 20:18:19.026809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271
[2024-07-24 20:18:19.026812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271
[2024-07-24 20:18:19.026858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271
[2024-07-24 20:18:19.026878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271
[2024-07-24 20:18:19.026895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271
[2024-07-24 20:18:19.026911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271
[2024-07-24 20:18:19.026929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271
[2024-07-24 20:18:19.026947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271
[2024-07-24 20:18:19.026982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.026989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271
[2024-07-24 20:18:19.027000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.027010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271
[2024-07-24 20:18:19.027017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.027030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271
[2024-07-24 20:18:19.027042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.027052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271
[2024-07-24 20:18:19.027060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.027072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271
[2024-07-24 20:18:19.027077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.027094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.027094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.271
[2024-07-24 20:18:19.027118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.271
[2024-07-24 20:18:19.027120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.271
[2024-07-24 20:18:19.027138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272
[2024-07-24 20:18:19.027144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272
[2024-07-24 20:18:19.027155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272
[2024-07-24 20:18:19.027165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272
[2024-07-24 20:18:19.027172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272
[2024-07-24 20:18:19.027187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272
[2024-07-24 20:18:19.027189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272
[2024-07-24 20:18:19.027208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272
[2024-07-24 20:18:19.027216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272
[2024-07-24 20:18:19.027226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272
[2024-07-24 20:18:19.027238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62
nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.027243] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.027277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.027294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.027322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221ee80 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 
20:18:19.027563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.027832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bd610 is same with the state(5) to be set 00:24:15.272 [2024-07-24 20:18:19.029013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272 [2024-07-24 20:18:19.029502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272 [2024-07-24 20:18:19.029523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272
[2024-07-24 20:18:19.029542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272
[2024-07-24 20:18:19.029570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272
[2024-07-24 20:18:19.029594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272
[2024-07-24 20:18:19.029602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.272
[2024-07-24 20:18:19.029617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272
[2024-07-24 20:18:19.029642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.272
[2024-07-24 20:18:19.029647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.272
[2024-07-24 20:18:19.029663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.272
[2024-07-24 20:18:19.029668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.029694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.029712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.029729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.029767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.029786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.029804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.029821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.029838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.029855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.029881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.029899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.029917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.029934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.029972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.029991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.029995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.273
[2024-07-24 20:18:19.030455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.273
[2024-07-24 20:18:19.030473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.273
[2024-07-24 20:18:19.030493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bdad0 is same with the state(5) to be set 00:24:15.274
[2024-07-24 20:18:19.030798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274
[2024-07-24 20:18:19.030859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274
[2024-07-24 20:18:19.030877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.030898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.030916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.030937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.030955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.030976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.030994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 
20:18:19.031271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.274 [2024-07-24 20:18:19.031687] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.274 [2024-07-24 20:18:19.031705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2220330 is same with the state(5) to be set 00:24:15.274 [2024-07-24 20:18:19.032235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.274 [2024-07-24 20:18:19.032281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.274 [2024-07-24 20:18:19.032315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.274 [2024-07-24 20:18:19.032343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be 
set 00:24:15.275 [2024-07-24 20:18:19.032895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.032996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275 [2024-07-24 20:18:19.033560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 
is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.275
[2024-07-24 20:18:19.033894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.275
[2024-07-24 20:18:19.033923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.275
[2024-07-24 20:18:19.033952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.275
[2024-07-24 20:18:19.033979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.033988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.275
[2024-07-24 20:18:19.034008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.275
[2024-07-24 20:18:19.034011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.034028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.275
[2024-07-24 20:18:19.034039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.034046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.275
[2024-07-24 20:18:19.034070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.275
[2024-07-24 20:18:19.034070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.034088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.275
[2024-07-24 20:18:19.034098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.034119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.275
[2024-07-24 20:18:19.034127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.034143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.275
[2024-07-24 20:18:19.034155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.275
[2024-07-24 20:18:19.034165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.275
[2024-07-24 20:18:19.034183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.275
[2024-07-24 20:18:19.034183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.276
[2024-07-24 20:18:19.034203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276
[2024-07-24 20:18:19.034212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5875a0 is same with the state(5) to be set 00:24:15.276
[2024-07-24 20:18:19.034220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276
[2024-07-24 20:18:19.034242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276
[2024-07-24 20:18:19.034280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276
[2024-07-24 20:18:19.034316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276
[2024-07-24 20:18:19.034352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276
[2024-07-24 20:18:19.034390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.034968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.034988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.035005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.035024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.035042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.035062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.276 [2024-07-24 20:18:19.035079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.276 [2024-07-24 20:18:19.035307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587a60 is same with the state(5) to be set 00:24:15.541 [2024-07-24 20:18:19.058824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.058909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.058933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.058952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.058973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.058992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.059012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.059052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.059073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.059091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 
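
The (00/08) pair that spdk_nvme_print_completion keeps printing above is NVMe status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion": every outstanding READ on the qpair is aborted when its submission queue is torn down for the controller reset. A minimal sketch of that decode, assuming only the NVMe completion status bit layout; the struct and helper below are illustrative, not SPDK's own types:

/* decode_status.c - illustrative decode of the "(SCT/SC)" pair in the log. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {      /* NVMe completion DW3 status field + phase bit */
    uint16_t p   : 1;     /* phase tag */
    uint16_t sc  : 8;     /* status code */
    uint16_t sct : 3;     /* status code type */
    uint16_t crd : 2;     /* command retry delay */
    uint16_t m   : 1;     /* more */
    uint16_t dnr : 1;     /* do not retry */
};

static const char *status_str(const struct nvme_status *s)
{
    /* SCT 0x0 / SC 0x08 is "Command Aborted due to SQ Deletion" in the
     * NVMe spec, which SPDK renders as "ABORTED - SQ DELETION (00/08)". */
    if (s->sct == 0x0 && s->sc == 0x08) {
        return "ABORTED - SQ DELETION";
    }
    return "OTHER";
}

int main(void)
{
    struct nvme_status s = { .sct = 0x0, .sc = 0x08, .m = 0, .dnr = 0 };
    printf("%s (%02x/%02x) m:%d dnr:%d\n", status_str(&s), s.sct, s.sc, s.m, s.dnr);
    return 0;
}

Note dnr:0 in every completion: "do not retry" is clear, so the host is allowed to resubmit these reads once the reset finishes.
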
[2024-07-24 20:18:19.059111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.059129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.059148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.059166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.059186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.059204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.059223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.059241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.059260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.059278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.059298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.541 [2024-07-24 20:18:19.059315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.541 [2024-07-24 20:18:19.059335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 
20:18:19.059511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059898] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.059973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.059991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.542 [2024-07-24 20:18:19.060410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.060437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a3a50 is same with the state(5) to be set 00:24:15.542 [2024-07-24 20:18:19.062977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.542 [2024-07-24 20:18:19.063025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:15.542 [2024-07-24 20:18:19.063048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:15.542 [2024-07-24 20:18:19.063070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:15.542 [2024-07-24 20:18:19.063299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.063349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.063384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.063418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.063463] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144460 is same with the state(5) to be set 00:24:15.542 [2024-07-24 20:18:19.063546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.063602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.063654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.063690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.542 [2024-07-24 20:18:19.063724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daf90 is same with the state(5) to be set 00:24:15.542 [2024-07-24 20:18:19.063797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.542 [2024-07-24 20:18:19.063832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.063881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.543 [2024-07-24 20:18:19.063911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.063942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.543 [2024-07-24 20:18:19.063983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.064015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.543 [2024-07-24 20:18:19.064058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.064097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dd790 is same with the state(5) to be set 00:24:15.543 [2024-07-24 20:18:19.064191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.543 [2024-07-24 20:18:19.064229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.064262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.543 [2024-07-24 20:18:19.064292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.064323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.543 [2024-07-24 20:18:19.064353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.064372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.543 [2024-07-24 20:18:19.064389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.064405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c15610 is same with the state(5) to be set 00:24:15.543 [2024-07-24 20:18:19.065232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.543 [2024-07-24 20:18:19.065277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d01200 with addr=10.0.0.2, port=4420 00:24:15.543 [2024-07-24 20:18:19.065305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01200 is same with the state(5) to be set 00:24:15.543 [2024-07-24 20:18:19.065516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.543 [2024-07-24 20:18:19.065550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22df950 with addr=10.0.0.2, port=4420 00:24:15.543 [2024-07-24 20:18:19.065570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22df950 is same with the state(5) to be set 00:24:15.543 [2024-07-24 20:18:19.065751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.543 [2024-07-24 20:18:19.065783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213fec0 with addr=10.0.0.2, port=4420 00:24:15.543 [2024-07-24 20:18:19.065803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213fec0 is same with the state(5) to be set 00:24:15.543 [2024-07-24 20:18:19.065999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.543 [2024-07-24 20:18:19.066030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216fc00 with addr=10.0.0.2, port=4420 00:24:15.543 [2024-07-24 20:18:19.066050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216fc00 is same with the state(5) to be set 00:24:15.543 [2024-07-24 20:18:19.067180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 
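
The posix_sock_create: connect() failed, errno = 111 entries a little further up are plain POSIX: errno 111 on Linux is ECONNREFUSED, i.e. the host is retrying 10.0.0.2:4420 while the target side of the reset test momentarily has no listener accepting the qpair. A small standalone reproduction of that failure mode, assuming an address that is reachable but has nothing listening (10.0.0.2 and 4420 are simply the values from the log):

/* connect_refused.c - reproduces the errno the log reports when no listener exists. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),   /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With a reachable host but no listener this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
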
[2024-07-24 20:18:19.067675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.067967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.067986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.068005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.068023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.068043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 
20:18:19.068061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.068081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.068099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.068119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.068137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.068157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.068175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.068196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.543 [2024-07-24 20:18:19.068214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.543 [2024-07-24 20:18:19.068234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.068966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.068984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.544 [2024-07-24 20:18:19.069746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.544 [2024-07-24 20:18:19.069765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2410 is same with the state(5) to be set 00:24:15.544 [2024-07-24 20:18:19.073887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:15.544 [2024-07-24 20:18:19.073944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:15.545 [2024-07-24 20:18:19.074017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d01200 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.074050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22df950 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.074076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213fec0 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.074098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216fc00 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.074172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144460 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.074217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22daf90 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.074260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22dd790 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.074304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c15610 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.074347] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.545 [2024-07-24 20:18:19.074374] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.545 [2024-07-24 20:18:19.074399] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.545 [2024-07-24 20:18:19.074421] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.545 [2024-07-24 20:18:19.074969] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c6ea0 was disconnected and freed. reset controller. 
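
Two details worth decoding in the block above: the flush failures report (9), which is EBADF, because each qpair's socket had already been closed by the time nvme_tcp_qpair_process_completions tried to flush it; and the four "Unable to perform failover, already in progress." notices show bdev_nvme collapsing duplicate failover requests into the one reset that is already running. A sketch of that single-failover-in-flight guard, with hypothetical names; the real logic lives in bdev_nvme.c:2899 and is not reproduced here:

/* failover_guard.c - illustrative guard, not SPDK's bdev_nvme implementation. */
#include <stdbool.h>
#include <stdio.h>

struct ctrlr {
    bool failover_in_progress;   /* set while a reset/failover is running */
};

/* Hypothetical helper: failover requests arriving while one is outstanding
 * are refused, which is what the repeated notices in the log record. */
static void request_failover(struct ctrlr *c)
{
    if (c->failover_in_progress) {
        printf("Unable to perform failover, already in progress.\n");
        return;
    }
    c->failover_in_progress = true;
    printf("starting failover: disconnect qpairs, then reset controller\n");
}

int main(void)
{
    struct ctrlr c = { .failover_in_progress = false };
    request_failover(&c);   /* first request wins */
    request_failover(&c);   /* duplicates are collapsed, as in the log */
    return 0;
}
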
00:24:15.545 [2024-07-24 20:18:19.075280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.545 [2024-07-24 20:18:19.075319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2142cb0 with addr=10.0.0.2, port=4420 00:24:15.545 [2024-07-24 20:18:19.075341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2142cb0 is same with the state(5) to be set 00:24:15.545 [2024-07-24 20:18:19.075551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.545 [2024-07-24 20:18:19.075586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21431c0 with addr=10.0.0.2, port=4420 00:24:15.545 [2024-07-24 20:18:19.075607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21431c0 is same with the state(5) to be set 00:24:15.545 [2024-07-24 20:18:19.075628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.545 [2024-07-24 20:18:19.075645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.545 [2024-07-24 20:18:19.075667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.545 [2024-07-24 20:18:19.075694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:15.545 [2024-07-24 20:18:19.075712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:15.545 [2024-07-24 20:18:19.075733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:15.545 [2024-07-24 20:18:19.075756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:15.545 [2024-07-24 20:18:19.075774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:15.545 [2024-07-24 20:18:19.075792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:15.545 [2024-07-24 20:18:19.075820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:15.545 [2024-07-24 20:18:19.075840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:15.545 [2024-07-24 20:18:19.075857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:15.545 [2024-07-24 20:18:19.076342] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:15.545 [2024-07-24 20:18:19.076460] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:15.545 [2024-07-24 20:18:19.076977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.545 [2024-07-24 20:18:19.077008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.545 [2024-07-24 20:18:19.077025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.545 [2024-07-24 20:18:19.077040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
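
At this point the reconnect attempts race the target teardown and lose: spdk_nvme_ctrlr_reconnect_poll_async reports that reinitialization failed, nvme_ctrlr_fail leaves cnode1, cnode2, cnode3, and cnode10 in the failed state, and each pending reset completes with "Resetting controller failed." A sketch of the disconnect/reconnect/poll sequence a caller drives, assuming the async reconnect API whose internals appear in this log's nvme_ctrlr.c traces (error handling trimmed):

/* reconnect_poll.c - sketch of the reset sequence behind these log lines,
 * assuming SPDK's public async reconnect API is available. */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static int reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
    int rc;

    /* Step 1: drop the admin/IO connections ("resetting controller"). */
    rc = spdk_nvme_ctrlr_disconnect(ctrlr);
    if (rc != 0) {
        return rc;
    }

    /* Step 2: start the async reconnect, then poll it to completion;
     * -EAGAIN from the poll means the reconnect is still in progress. */
    spdk_nvme_ctrlr_reconnect_async(ctrlr);
    do {
        rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
    } while (rc == -EAGAIN);

    if (rc != 0) {
        /* The path this log shows: "controller reinitialization failed"
         * followed by the controller being left "in failed state." */
        fprintf(stderr, "controller reinitialization failed: %d\n", rc);
    }
    return rc;
}
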
00:24:15.545 [2024-07-24 20:18:19.077056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:15.545 [2024-07-24 20:18:19.077102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142cb0 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.077132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21431c0 (9): Bad file descriptor 00:24:15.545 [2024-07-24 20:18:19.077268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077660] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.545 [2024-07-24 20:18:19.077938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.545 [2024-07-24 20:18:19.077959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.077977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.077998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.546 [2024-07-24 20:18:19.078767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.546 [2024-07-24 20:18:19.078786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.078807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.078825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.078846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.078864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.078888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.078908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.078929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.078948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.078969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.078988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
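Each aborted READ above is paired with a completion whose status prints as (00/08): spdk_nvme_print_completion shows the tuple as (SCT/SC), here Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), followed by the phase, more, and do-not-retry bits. A hedged sketch of decoding that field from completion-entry dword 3 per the NVMe base specification layout; the struct and function names below are mine, not SPDK's:

    /* Decode the "(00/08) ... p:0 m:0 dnr:0" tuples from the log. NVMe CQE
     * dword 3 carries the phase tag in bit 16 and the status field in bits
     * 31:17. Field names here are illustrative, not SPDK's definitions. */
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {
        uint8_t p;    /* phase tag */
        uint8_t sc;   /* status code: 0x08 = Command Aborted due to SQ Deletion */
        uint8_t sct;  /* status code type: 0x0 = Generic Command Status */
        uint8_t m;    /* more */
        uint8_t dnr;  /* do not retry */
    };

    static struct nvme_status decode_status(uint32_t cqe_dw3)
    {
        uint16_t s = (uint16_t)(cqe_dw3 >> 16);
        struct nvme_status st = {
            .p   = s & 0x1,
            .sc  = (s >> 1) & 0xff,   /* dw3 bits 24:17 */
            .sct = (s >> 9) & 0x7,    /* dw3 bits 27:25 */
            .m   = (s >> 14) & 0x1,   /* dw3 bit 30 */
            .dnr = (s >> 15) & 0x1,   /* dw3 bit 31 */
        };
        return st;
    }

    int main(void)
    {
        uint32_t dw3 = 0x08u << 17;   /* SC = 0x08, everything else zero */
        struct nvme_status st = decode_status(dw3);
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", st.sct, st.sc, st.p, st.m, st.dnr);
        return 0;
    }

This prints "(00/08) p:0 m:0 dnr:0", matching the log: a benign abort status, emitted once for every command still in flight on the deleted submission queue.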
00:24:15.547 [2024-07-24 20:18:19.079300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 
20:18:19.079721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.547 [2024-07-24 20:18:19.079911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.547 [2024-07-24 20:18:19.079931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c5b00 is same with the state(5) to be set 00:24:15.547 [2024-07-24 20:18:19.080034] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c5b00 was disconnected and freed. reset controller. 00:24:15.547 [2024-07-24 20:18:19.080321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.548 [2024-07-24 20:18:19.080358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22dd790 with addr=10.0.0.2, port=4420 00:24:15.548 [2024-07-24 20:18:19.080380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dd790 is same with the state(5) to be set 00:24:15.548 [2024-07-24 20:18:19.080400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:15.548 [2024-07-24 20:18:19.080418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:15.548 [2024-07-24 20:18:19.080447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:15.548 [2024-07-24 20:18:19.080476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:15.548 [2024-07-24 20:18:19.080496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:15.548 [2024-07-24 20:18:19.096196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:15.548 [2024-07-24 20:18:19.098063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
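The failure chain repeated throughout this section is always the same four steps: posix_sock_create's connect() is refused, nvme_ctrlr_process_init observes the controller in error state, spdk_nvme_ctrlr_reconnect_poll_async reports that reinitialization failed, and nvme_ctrlr_fail leaves the controller in failed state, after which the bdev layer logs "Resetting controller failed." A loose, hedged sketch of that shape (all names hypothetical; the real logic lives in SPDK's nvme_ctrlr.c and bdev_nvme.c):

    /* Hedged sketch of the failure chain in the log: connect() is refused,
     * the init state machine flags the error, reconnect polling gives up,
     * and the controller is marked failed. Hypothetical names, not SPDK's. */
    #include <stdio.h>

    enum ctrlr_state { CONNECTING, ERROR_STATE, FAILED };

    struct ctrlr {
        enum ctrlr_state st;
        const char *nqn;
    };

    static int connect_sock(struct ctrlr *c)
    {
        (void)c;
        return -1;                     /* connect() failed, errno = 111 */
    }

    static int reconnect_poll(struct ctrlr *c)
    {
        if (c->st == CONNECTING && connect_sock(c) != 0) {
            c->st = ERROR_STATE;       /* "Ctrlr is in error state" */
        }
        if (c->st == ERROR_STATE) {
            fprintf(stderr, "[%s] controller reinitialization failed\n", c->nqn);
            c->st = FAILED;            /* "in failed state." */
            return -1;                 /* reset completion then reports
                                        * "Resetting controller failed." */
        }
        return 0;
    }

    int main(void)
    {
        struct ctrlr c = { CONNECTING, "nqn.2016-06.io.spdk:cnode5" };
        return reconnect_poll(&c) ? 1 : 0;
    }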
00:24:15.548 [2024-07-24 20:18:19.098099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.548 [2024-07-24 20:18:19.098124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:15.548 [2024-07-24 20:18:19.098205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22dd790 (9): Bad file descriptor 00:24:15.548 [2024-07-24 20:18:19.098341] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.548 [2024-07-24 20:18:19.098856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.548 [2024-07-24 20:18:19.098902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22daf90 with addr=10.0.0.2, port=4420 00:24:15.548 [2024-07-24 20:18:19.098937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daf90 is same with the state(5) to be set 00:24:15.548 [2024-07-24 20:18:19.098959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:15.548 [2024-07-24 20:18:19.098977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:15.548 [2024-07-24 20:18:19.098995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:15.548 [2024-07-24 20:18:19.099119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.099970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.099989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.548 [2024-07-24 20:18:19.100011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.548 [2024-07-24 20:18:19.100029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
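A few records up, bdev_nvme_failover_ctrlr_unsafe declines with "Unable to perform failover, already in progress." That reads as the standard in-progress guard: a second failover request arriving while one is outstanding is refused rather than queued behind it. A minimal sketch of the idiom under assumed names (the general pattern only, not the bdev_nvme implementation):

    /* Illustrative in-progress guard, the pattern suggested by the
     * "Unable to perform failover, already in progress" notice.
     * Hypothetical names; not the bdev_nvme code. */
    #include <stdbool.h>
    #include <stdio.h>

    struct ctrlr {
        bool failover_in_progress;
    };

    static int failover_ctrlr(struct ctrlr *c)
    {
        if (c->failover_in_progress) {
            printf("Unable to perform failover, already in progress.\n");
            return -1;                /* refused; caller retries later */
        }
        c->failover_in_progress = true;
        /* ... switch to the alternate path, reconnect, and clear the
         * flag from the completion callback ... */
        return 0;
    }

    int main(void)
    {
        struct ctrlr c = { .failover_in_progress = false };
        failover_ctrlr(&c);                   /* starts a failover */
        return failover_ctrlr(&c) ? 1 : 0;    /* second call is refused */
    }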
00:24:15.549 [2024-07-24 20:18:19.100293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 
20:18:19.100742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.100975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.100993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.101014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.101032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.101053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.101072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.101092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.101110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.101131] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.101149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.101170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.549 [2024-07-24 20:18:19.101193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.549 [2024-07-24 20:18:19.101215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.101872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.101907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210f680 is same with the state(5) to be set 00:24:15.550 [2024-07-24 20:18:19.103568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.103969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.103989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.104008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.550 [2024-07-24 20:18:19.104028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.550 [2024-07-24 20:18:19.104047] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.551 [2024-07-24 20:18:19.104424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.551 [2024-07-24 20:18:19.104455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:15.551 [2024-07-24 20:18:19.104477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.551 [2024-07-24 20:18:19.104495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION notice pair repeats for cid:23 through cid:63 (lba 19328 through 24448 in steps of 128, len:128); 41 near-identical pairs elided ...]
00:24:15.553 [2024-07-24 20:18:19.106168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4750 is same with the state(5) to be set
00:24:15.553 [2024-07-24 20:18:19.109114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:15.553 [2024-07-24 20:18:19.109158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:24:15.553 [2024-07-24 20:18:19.109183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:15.553 [2024-07-24 20:18:19.109205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:15.553 [2024-07-24 20:18:19.109227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:15.553 [2024-07-24 20:18:19.109251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:15.553 [2024-07-24 20:18:19.109274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:15.553 [2024-07-24 20:18:19.109378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22daf90 (9): Bad file descriptor
00:24:15.553 [2024-07-24 20:18:19.109464] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:15.553 [2024-07-24 20:18:19.109500] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:15.553 [2024-07-24 20:18:19.109548] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
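[editor's note] With queue depth 64 and one in-flight READ per cid, the storm above is expected: deleting the submission queue during controller reset completes every outstanding command individually with ABORTED - SQ DELETION. A quick way to summarize such a storm from a captured console log (the log filename here is hypothetical):

  grep -c 'ABORTED - SQ DELETION' nvmf_shutdown_tc3.log                     # total aborted completions
  grep -oE 'cid:[0-9]+ nsid:1 lba:[0-9]+' nvmf_shutdown_tc3.log | sort -u   # distinct aborted READs

The first command counts aborted completions; the second lists each aborted command once with its cid and lba.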
00:24:15.553 [2024-07-24 20:18:19.109636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:15.553 task offset: 24448 on job bdev=Nvme5n1 fails
00:24:15.553
00:24:15.553 Latency(us)
00:24:15.553 Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average    min       max
00:24:15.553 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme1n1 ended in about 1.00 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme1n1  : 1.00  127.51  7.97  63.75  0.00  330003.34  24078.41  344865.00
00:24:15.553 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme2n1 ended in about 1.01 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme2n1  : 1.01  126.95  7.93  63.47  0.00  323018.15  45826.65  302921.96
00:24:15.553 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme3n1 ended in about 1.01 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme3n1  : 1.01  126.40  7.90  63.20  0.00  316208.67  26796.94  323116.75
00:24:15.553 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme4n1 ended in about 1.05 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme4n1  : 1.05  121.83  7.61  60.92  0.00  320662.69  44079.03  279620.27
00:24:15.553 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme5n1 ended in about 1.00 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme5n1  : 1.00  128.51  8.03  64.25  0.00  294179.90  9175.04   329330.54
00:24:15.553 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme6n1 ended in about 1.08 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme6n1  : 1.08  118.23  7.39  59.11  0.00  315009.71  45826.65  318456.41
00:24:15.553 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme7n1 ended in about 1.09 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme7n1  : 1.09  117.76  7.36  58.88  0.00  308453.07  63691.28  312242.63
00:24:15.553 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme8n1 ended in about 1.08 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme8n1  : 1.08  118.82  7.43  59.41  0.00  297222.19  23787.14  346418.44
00:24:15.553 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme9n1  : 1.05  121.57  7.60  0.00   0.00  421729.47  26214.40  397682.16
00:24:15.553 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.553 Job: Nvme10n1 ended in about 1.04 seconds with error
00:24:15.553 Verification LBA range: start 0x0 length 0x400
00:24:15.553 Nvme10n1 : 1.04  61.46   3.84  61.46  0.00  404134.12  26408.58  382147.70
00:24:15.553 ===================================================================================================================
00:24:15.553 Total    :       1169.03 73.06 554.46 0.00  327357.16  9175.04   397682.16
00:24:15.553 [2024-07-24 20:18:19.148376] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:15.553 [2024-07-24 20:18:19.148494]
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:15.553 [2024-07-24 20:18:19.148933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.553 [2024-07-24 20:18:19.148992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216fc00 with addr=10.0.0.2, port=4420 00:24:15.553 [2024-07-24 20:18:19.149021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216fc00 is same with the state(5) to be set 00:24:15.553 [2024-07-24 20:18:19.149319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.553 [2024-07-24 20:18:19.149354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213fec0 with addr=10.0.0.2, port=4420 00:24:15.553 [2024-07-24 20:18:19.149376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213fec0 is same with the state(5) to be set 00:24:15.553 [2024-07-24 20:18:19.149584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.553 [2024-07-24 20:18:19.149619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22df950 with addr=10.0.0.2, port=4420 00:24:15.553 [2024-07-24 20:18:19.149654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22df950 is same with the state(5) to be set 00:24:15.553 [2024-07-24 20:18:19.149869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.553 [2024-07-24 20:18:19.149903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d01200 with addr=10.0.0.2, port=4420 00:24:15.553 [2024-07-24 20:18:19.149924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d01200 is same with the state(5) to be set 00:24:15.553 [2024-07-24 20:18:19.150088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.553 [2024-07-24 20:18:19.150120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21431c0 with addr=10.0.0.2, port=4420 00:24:15.553 [2024-07-24 20:18:19.150140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21431c0 is same with the state(5) to be set 00:24:15.553 [2024-07-24 20:18:19.150344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.553 [2024-07-24 20:18:19.150376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2142cb0 with addr=10.0.0.2, port=4420 00:24:15.554 [2024-07-24 20:18:19.150396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2142cb0 is same with the state(5) to be set 00:24:15.554 [2024-07-24 20:18:19.150417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.150445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.150468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:15.554 [2024-07-24 20:18:19.151320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:15.554 [2024-07-24 20:18:19.151357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
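[editor's note] The Latency(us) table above is worth a quick consistency check: with 64 KiB IOs (IO size: 65536), MiB/s should equal IOPS x 65536 / 2^20, and the Total row should be the column sums. Checking with awk (a sanity check on the report, not part of the test):

  awk 'BEGIN { print 127.51 * 65536 / 1048576 }'    # 7.9694 -> matches Nvme1n1 MiB/s (7.97)
  awk 'BEGIN { print 127.51+126.95+126.40+121.83+128.51+118.23+117.76+118.82+121.57+61.46 }'
                                                    # 1169.04 -> matches Total IOPS (1169.03 after per-row rounding)

Nvme9n1 is the one job with no "ended in about ... with error" line, and correspondingly the only row reporting 0.00 Fail/s.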
00:24:15.554 [2024-07-24 20:18:19.151664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.554 [2024-07-24 20:18:19.151702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c15610 with addr=10.0.0.2, port=4420 00:24:15.554 [2024-07-24 20:18:19.151724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c15610 is same with the state(5) to be set 00:24:15.554 [2024-07-24 20:18:19.152025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.554 [2024-07-24 20:18:19.152060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144460 with addr=10.0.0.2, port=4420 00:24:15.554 [2024-07-24 20:18:19.152081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144460 is same with the state(5) to be set 00:24:15.554 [2024-07-24 20:18:19.152115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216fc00 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.152146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213fec0 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.152170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22df950 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.152193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d01200 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.152216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21431c0 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.152238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142cb0 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.152320] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.554 [2024-07-24 20:18:19.152352] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.554 [2024-07-24 20:18:19.152384] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.554 [2024-07-24 20:18:19.152411] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.554 [2024-07-24 20:18:19.152463] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.554 [2024-07-24 20:18:19.152493] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
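[editor's note] errno = 111 is ECONNREFUSED: the target was already brought down by the shutdown test, so each reconnect attempt to 10.0.0.2:4420 is refused at the TCP layer before any NVMe-oF traffic is exchanged. The same condition can be probed by hand from the initiator side (assuming nc is installed; this is not part of the test scripts):

  nc -z -w 1 10.0.0.2 4420 && echo "listener up" || echo "refused/unreachable (the errno 111 path above)"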
00:24:15.554 [2024-07-24 20:18:19.152819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.554 [2024-07-24 20:18:19.152860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22dd790 with addr=10.0.0.2, port=4420 00:24:15.554 [2024-07-24 20:18:19.152881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dd790 is same with the state(5) to be set 00:24:15.554 [2024-07-24 20:18:19.152906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c15610 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.152931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144460 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.152952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.152969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.152986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:15.554 [2024-07-24 20:18:19.153012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.153031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.153049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:15.554 [2024-07-24 20:18:19.153070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.153089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.153106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:15.554 [2024-07-24 20:18:19.153127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.153144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.153161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.554 [2024-07-24 20:18:19.153182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.153200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.153216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:15.554 [2024-07-24 20:18:19.153237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.153254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.153271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
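[editor's note] Every controller fails through the same three steps here: nvme_ctrlr_process_init finds the controller in error state, spdk_nvme_ctrlr_reconnect_poll_async gives up, and nvme_ctrlr_fail parks it in failed state. How long bdev_nvme keeps retrying before reaching that point is configurable when the controller is attached; a hedged sketch (long-option names as in recent SPDK rpc.py; verify against your tree before relying on them):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 5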
00:24:15.554 [2024-07-24 20:18:19.153358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:15.554 [2024-07-24 20:18:19.153390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.554 [2024-07-24 20:18:19.153409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.554 [2024-07-24 20:18:19.153444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.554 [2024-07-24 20:18:19.153466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.554 [2024-07-24 20:18:19.153481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.554 [2024-07-24 20:18:19.153496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.554 [2024-07-24 20:18:19.153528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22dd790 (9): Bad file descriptor 00:24:15.554 [2024-07-24 20:18:19.153553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.153570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.153587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:15.554 [2024-07-24 20:18:19.153609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:15.554 [2024-07-24 20:18:19.153627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:15.554 [2024-07-24 20:18:19.153644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:15.554 [2024-07-24 20:18:19.153699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.554 [2024-07-24 20:18:19.153724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.554 [2024-07-24 20:18:19.153972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.554 [2024-07-24 20:18:19.154006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22daf90 with addr=10.0.0.2, port=4420 00:24:15.555 [2024-07-24 20:18:19.154027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daf90 is same with the state(5) to be set 00:24:15.555 [2024-07-24 20:18:19.154046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:15.555 [2024-07-24 20:18:19.154063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:15.555 [2024-07-24 20:18:19.154081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:15.555 [2024-07-24 20:18:19.154134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
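[editor's note] The teardown that follows is deliberately idempotent: the target pid is already gone, so kill -9 reports "No such process" and is swallowed by true, state files are removed with rm -f, and the nvme kernel modules are unloaded in reverse dependency order. The same pattern reduced to a sketch (the pid-file path is hypothetical):

  pid=$(cat /var/run/nvmf_tgt.pid 2>/dev/null)
  [[ -n "$pid" ]] && sudo kill -9 "$pid" 2>/dev/null || true
  rm -f ./local-job0-0-verify.state
  sudo modprobe -v -r nvme-tcp nvme-fabrics || true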
00:24:15.555 [2024-07-24 20:18:19.154164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22daf90 (9): Bad file descriptor 00:24:15.555 [2024-07-24 20:18:19.154217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:15.555 [2024-07-24 20:18:19.154242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:15.555 [2024-07-24 20:18:19.154260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:15.555 [2024-07-24 20:18:19.154309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.122 20:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:16.122 20:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2104968 00:24:17.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2104968) - No such process 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.058 rmmod nvme_tcp 00:24:17.058 rmmod nvme_fabrics 00:24:17.058 rmmod nvme_keyring 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.058 20:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.592 00:24:19.592 real 0m7.921s 00:24:19.592 user 0m19.540s 00:24:19.592 sys 0m1.677s 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:19.592 ************************************ 00:24:19.592 END TEST nvmf_shutdown_tc3 00:24:19.592 ************************************ 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:19.592 00:24:19.592 real 0m29.155s 00:24:19.592 user 1m20.633s 00:24:19.592 sys 0m7.238s 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:19.592 ************************************ 00:24:19.592 END TEST nvmf_shutdown 00:24:19.592 ************************************ 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:24:19.592 00:24:19.592 real 12m39.022s 00:24:19.592 user 30m10.385s 00:24:19.592 sys 2m59.983s 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.592 20:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:19.592 ************************************ 00:24:19.592 END TEST nvmf_target_extra 00:24:19.592 ************************************ 00:24:19.592 20:18:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:19.592 20:18:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.592 20:18:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.592 20:18:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:19.592 ************************************ 00:24:19.592 START TEST nvmf_host 00:24:19.592 ************************************ 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:19.592 * Looking for test storage... 
00:24:19.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.592 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.593 ************************************ 00:24:19.593 START TEST nvmf_multicontroller 00:24:19.593 ************************************ 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:19.593 * Looking for test storage... 
00:24:19.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.593 20:18:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.593 20:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.878 20:18:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.878 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:22.879 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:22.879 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:22.879 Found net devices under 0000:84:00.0: cvl_0_0 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:22.879 Found net devices under 0000:84:00.1: cvl_0_1 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:22.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:24:22.879 00:24:22.879 --- 10.0.0.2 ping statistics --- 00:24:22.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.879 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:24:22.879 00:24:22.879 --- 10.0.0.1 ping statistics --- 00:24:22.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.879 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2107547 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2107547 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2107547 ']' 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.879 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.879 [2024-07-24 20:18:26.350723] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
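The target for this test is launched inside the cvl_0_0_ns_spdk namespace with -e 0xFFFF (all tracepoint groups enabled, which is why the spdk_trace notices appear) and -m 0xE (reactors on cores 1-3, matching the reactor messages in the trace). A minimal sketch of the same launch-and-wait pattern, with paths relative to an SPDK checkout; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact implementation:

  # start the NVMe-oF target inside the namespace prepared above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # block until the app answers on its RPC socket before sending any configuration
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done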
00:24:22.879 [2024-07-24 20:18:26.350825] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.879 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.879 [2024-07-24 20:18:26.445084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:22.879 [2024-07-24 20:18:26.584737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.879 [2024-07-24 20:18:26.584804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.879 [2024-07-24 20:18:26.584824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.880 [2024-07-24 20:18:26.584851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.880 [2024-07-24 20:18:26.584867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.880 [2024-07-24 20:18:26.584951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.880 [2024-07-24 20:18:26.585015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.880 [2024-07-24 20:18:26.585019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 [2024-07-24 20:18:26.745077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 Malloc0 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.138 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.138 
20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 [2024-07-24 20:18:26.814995] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 [2024-07-24 20:18:26.822808] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 Malloc1 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2107689 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2107689 /var/tmp/bdevperf.sock 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2107689 ']' 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
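bdevperf is started here with -z, which leaves the app idle until it is configured over its own RPC socket (-r /var/tmp/bdevperf.sock), so controllers can be attached before any I/O is issued. A sketch of the attach calls that follow, reusing this run's addresses and NQNs; as the later host/multicontroller.sh@79 step shows, attaching the same bdev name to a second listener of the same subsystem is accepted as an additional path, while repeating an identical path (or reusing the name for a different subsystem) is rejected with error -114:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # first path; -i/-c pin the host-side address and service id
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # second path: same controller name and subsystem NQN, second port
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1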
00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.139 20:18:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.706 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.706 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:23.706 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:23.706 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.706 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.964 NVMe0n1 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.964 1 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.964 request: 00:24:23.964 { 00:24:23.964 "name": "NVMe0", 00:24:23.964 "trtype": "tcp", 00:24:23.964 "traddr": "10.0.0.2", 00:24:23.964 "adrfam": "ipv4", 00:24:23.964 
"trsvcid": "4420", 00:24:23.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.964 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:23.964 "hostaddr": "10.0.0.2", 00:24:23.964 "hostsvcid": "60000", 00:24:23.964 "prchk_reftag": false, 00:24:23.964 "prchk_guard": false, 00:24:23.964 "hdgst": false, 00:24:23.964 "ddgst": false, 00:24:23.964 "method": "bdev_nvme_attach_controller", 00:24:23.964 "req_id": 1 00:24:23.964 } 00:24:23.964 Got JSON-RPC error response 00:24:23.964 response: 00:24:23.964 { 00:24:23.964 "code": -114, 00:24:23.964 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:23.964 } 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.964 request: 00:24:23.964 { 00:24:23.964 "name": "NVMe0", 00:24:23.964 "trtype": "tcp", 00:24:23.964 "traddr": "10.0.0.2", 00:24:23.964 "adrfam": "ipv4", 00:24:23.964 "trsvcid": "4420", 00:24:23.964 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:23.964 "hostaddr": "10.0.0.2", 00:24:23.964 "hostsvcid": "60000", 00:24:23.964 "prchk_reftag": false, 00:24:23.964 "prchk_guard": false, 00:24:23.964 "hdgst": false, 00:24:23.964 "ddgst": false, 00:24:23.964 "method": "bdev_nvme_attach_controller", 00:24:23.964 "req_id": 1 00:24:23.964 } 00:24:23.964 Got JSON-RPC error response 00:24:23.964 response: 00:24:23.964 { 00:24:23.964 "code": -114, 00:24:23.964 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:24:23.964 } 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.964 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.965 request: 00:24:23.965 { 00:24:23.965 "name": "NVMe0", 00:24:23.965 "trtype": "tcp", 00:24:23.965 "traddr": "10.0.0.2", 00:24:23.965 "adrfam": "ipv4", 00:24:23.965 "trsvcid": "4420", 00:24:23.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.965 "hostaddr": "10.0.0.2", 00:24:23.965 "hostsvcid": "60000", 00:24:23.965 "prchk_reftag": false, 00:24:23.965 "prchk_guard": false, 00:24:23.965 "hdgst": false, 00:24:23.965 "ddgst": false, 00:24:23.965 "multipath": "disable", 00:24:23.965 "method": "bdev_nvme_attach_controller", 00:24:23.965 "req_id": 1 00:24:23.965 } 00:24:23.965 Got JSON-RPC error response 00:24:23.965 response: 00:24:23.965 { 00:24:23.965 "code": -114, 00:24:23.965 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:23.965 } 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.965 request: 00:24:23.965 { 00:24:23.965 "name": "NVMe0", 00:24:23.965 "trtype": "tcp", 00:24:23.965 "traddr": "10.0.0.2", 00:24:23.965 "adrfam": "ipv4", 00:24:23.965 "trsvcid": "4420", 00:24:23.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.965 "hostaddr": "10.0.0.2", 00:24:23.965 "hostsvcid": "60000", 00:24:23.965 "prchk_reftag": false, 00:24:23.965 "prchk_guard": false, 00:24:23.965 "hdgst": false, 00:24:23.965 "ddgst": false, 00:24:23.965 "multipath": "failover", 00:24:23.965 "method": "bdev_nvme_attach_controller", 00:24:23.965 "req_id": 1 00:24:23.965 } 00:24:23.965 Got JSON-RPC error response 00:24:23.965 response: 00:24:23.965 { 00:24:23.965 "code": -114, 00:24:23.965 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:23.965 } 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.965 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.222 00:24:24.222 20:18:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.222 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:24.222 20:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.618 0 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2107689 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2107689 ']' 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2107689 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2107689 00:24:25.618 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
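Once the '[' 2 '!=' 2 ']' check above confirms both controllers are attached, the paused bdevperf job is kicked over RPC rather than by restarting the binary, and paths are then torn down by name. A sketch of the same two calls in isolation (sockets as above):

  # resume the -z (paused) job and wait for the one-second write workload to finish
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # detach one controller by name once the run completes
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1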
00:24:25.619 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:25.619 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2107689' 00:24:25.619 killing process with pid 2107689 00:24:25.619 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2107689 00:24:25.619 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2107689 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:25.881 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:25.881 [2024-07-24 20:18:26.934776] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:24:25.881 [2024-07-24 20:18:26.934880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107689 ]
00:24:25.881 EAL: No free 2048 kB hugepages reported on node 1
00:24:25.881 [2024-07-24 20:18:27.013418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:25.881 [2024-07-24 20:18:27.152070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:25.881 [2024-07-24 20:18:27.948732] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name c42c35dc-c260-4ae1-bf98-6c8badee6af0 already exists
00:24:25.881 [2024-07-24 20:18:27.948786] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:c42c35dc-c260-4ae1-bf98-6c8badee6af0 alias for bdev NVMe1n1
00:24:25.881 [2024-07-24 20:18:27.948808] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:24:25.881 Running I/O for 1 seconds...
00:24:25.881
00:24:25.881 Latency(us)
00:24:25.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:25.881 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:24:25.881 NVMe0n1 : 1.01 14122.05 55.16 0.00 0.00 9046.01 6505.05 16117.00
00:24:25.881 ===================================================================================================================
00:24:25.881 Total : 14122.05 55.16 0.00 0.00 9046.01 6505.05 16117.00
00:24:25.881 Received shutdown signal, test time was about 1.000000 seconds
00:24:25.881
00:24:25.881 Latency(us)
00:24:25.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:25.881 ===================================================================================================================
00:24:25.881 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:25.881 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:25.881 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:25.882 rmmod nvme_tcp
00:24:25.882 rmmod nvme_fabrics
00:24:25.882 rmmod nvme_keyring
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2107547 ']'
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2107547
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2107547 ']'
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2107547
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2107547
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2107547'
00:24:25.882 killing process with pid 2107547
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2107547
00:24:25.882 20:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2107547
00:24:26.448 20:18:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:26.448 20:18:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:26.448 20:18:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:26.448 20:18:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:26.448 20:18:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:26.448 20:18:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:26.448 20:18:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:26.448 20:18:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:28.350 20:18:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:28.350
00:24:28.350 real 0m8.958s
00:24:28.350 user 0m13.745s
00:24:28.350 sys 0m3.170s
00:24:28.350 20:18:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:28.350 20:18:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:28.350 ************************************
00:24:28.350 END TEST nvmf_multicontroller
00:24:28.350 ************************************
00:24:28.350 20:18:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:24:28.350 20:18:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:28.350 20:18:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:28.350 20:18:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:28.609 ************************************
00:24:28.609 START TEST nvmf_aer
00:24:28.609 ************************************
00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:28.609 * Looking for test storage... 00:24:28.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.609 20:18:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:31.138 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:31.138 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:31.138 Found net devices under 0000:84:00.0: cvl_0_0 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.138 20:18:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:31.138 Found net devices under 0000:84:00.1: cvl_0_1 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.138 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.395 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.395 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.395 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.395 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.395 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.395 20:18:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:24:31.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:31.395 00:24:31.395 --- 10.0.0.2 ping statistics --- 00:24:31.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.395 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:24:31.395 00:24:31.395 --- 10.0.0.1 ping statistics --- 00:24:31.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.395 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2110040 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.395 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2110040 00:24:31.396 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2110040 ']' 00:24:31.396 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.396 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:31.396 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.396 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:31.396 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.396 [2024-07-24 20:18:35.113166] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
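The nvmf_tcp_init sequence traced above reduces to a short, self-contained sketch. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run's E810 ports; substitute your own NIC pair:

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                          # target port gets its own namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1    # verify both directions

With connectivity verified, the aer test body that follows stands up a target inside the namespace and provokes a namespace-change AER. Condensed to bare RPCs (rpc_cmd in the trace is SPDK's scripts/rpc.py, abbreviated here to rpc.py; the hot-add of Malloc1 as nsid 2 is what makes the controller fire the "Changed Namespace" notice logged further down):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0             # 64 MiB, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &                           # creates the touch file the harness waits on
  rpc.py bdev_malloc_create 64 4096 --name Malloc1            # 64 MiB, 4 KiB blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AER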
00:24:31.396 [2024-07-24 20:18:35.113267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.396 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.653 [2024-07-24 20:18:35.219848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.653 [2024-07-24 20:18:35.427282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.653 [2024-07-24 20:18:35.427387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.653 [2024-07-24 20:18:35.427423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.653 [2024-07-24 20:18:35.427481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.653 [2024-07-24 20:18:35.427520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.653 [2024-07-24 20:18:35.427612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.653 [2024-07-24 20:18:35.427675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.653 [2024-07-24 20:18:35.427712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.653 [2024-07-24 20:18:35.427716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.910 [2024-07-24 20:18:35.608783] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.910 Malloc0 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.910 20:18:35 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.910 [2024-07-24 20:18:35.667517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.910 [ 00:24:31.910 { 00:24:31.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:31.910 "subtype": "Discovery", 00:24:31.910 "listen_addresses": [], 00:24:31.910 "allow_any_host": true, 00:24:31.910 "hosts": [] 00:24:31.910 }, 00:24:31.910 { 00:24:31.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.910 "subtype": "NVMe", 00:24:31.910 "listen_addresses": [ 00:24:31.910 { 00:24:31.910 "trtype": "TCP", 00:24:31.910 "adrfam": "IPv4", 00:24:31.910 "traddr": "10.0.0.2", 00:24:31.910 "trsvcid": "4420" 00:24:31.910 } 00:24:31.910 ], 00:24:31.910 "allow_any_host": true, 00:24:31.910 "hosts": [], 00:24:31.910 "serial_number": "SPDK00000000000001", 00:24:31.910 "model_number": "SPDK bdev Controller", 00:24:31.910 "max_namespaces": 2, 00:24:31.910 "min_cntlid": 1, 00:24:31.910 "max_cntlid": 65519, 00:24:31.910 "namespaces": [ 00:24:31.910 { 00:24:31.910 "nsid": 1, 00:24:31.910 "bdev_name": "Malloc0", 00:24:31.910 "name": "Malloc0", 00:24:31.910 "nguid": "0DA1AFEAF8D541C9A0562235B05B2E9E", 00:24:31.910 "uuid": "0da1afea-f8d5-41c9-a056-2235b05b2e9e" 00:24:31.910 } 00:24:31.910 ] 00:24:31.910 } 00:24:31.910 ] 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2110072 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:31.910 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:32.167 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.167 Malloc1 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.167 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.425 [ 00:24:32.425 { 00:24:32.425 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:32.425 "subtype": "Discovery", 00:24:32.425 "listen_addresses": [], 00:24:32.425 "allow_any_host": true, 00:24:32.425 "hosts": [] 00:24:32.425 }, 00:24:32.425 { 00:24:32.425 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.425 "subtype": "NVMe", 00:24:32.425 "listen_addresses": [ 00:24:32.425 { 00:24:32.425 "trtype": "TCP", 00:24:32.425 "adrfam": "IPv4", 00:24:32.425 "traddr": "10.0.0.2", 00:24:32.425 "trsvcid": "4420" 00:24:32.425 } 00:24:32.425 ], 00:24:32.425 "allow_any_host": true, 00:24:32.425 "hosts": [], 00:24:32.425 "serial_number": "SPDK00000000000001", 00:24:32.425 "model_number": "SPDK bdev Controller", 00:24:32.425 "max_namespaces": 2, 00:24:32.425 "min_cntlid": 1, 00:24:32.425 "max_cntlid": 65519, 00:24:32.425 "namespaces": [ 00:24:32.425 { 00:24:32.425 "nsid": 1, 00:24:32.425 "bdev_name": "Malloc0", 00:24:32.425 "name": "Malloc0", 00:24:32.425 "nguid": "0DA1AFEAF8D541C9A0562235B05B2E9E", 00:24:32.425 "uuid": "0da1afea-f8d5-41c9-a056-2235b05b2e9e" 00:24:32.425 }, 00:24:32.425 { 00:24:32.425 "nsid": 2, 00:24:32.425 "bdev_name": "Malloc1", 00:24:32.425 "name": "Malloc1", 00:24:32.425 "nguid": 
"EB77300D0AE94FB2A4E3795024C1BD3C", 00:24:32.425 "uuid": "eb77300d-0ae9-4fb2-a4e3-795024c1bd3c" 00:24:32.425 } 00:24:32.425 ] 00:24:32.425 } 00:24:32.425 ] 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2110072 00:24:32.425 Asynchronous Event Request test 00:24:32.425 Attaching to 10.0.0.2 00:24:32.425 Attached to 10.0.0.2 00:24:32.425 Registering asynchronous event callbacks... 00:24:32.425 Starting namespace attribute notice tests for all controllers... 00:24:32.425 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:32.425 aer_cb - Changed Namespace 00:24:32.425 Cleaning up... 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.425 20:18:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:32.425 rmmod nvme_tcp 00:24:32.425 rmmod nvme_fabrics 00:24:32.425 rmmod nvme_keyring 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2110040 ']' 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2110040 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2110040 ']' 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2110040 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@955 -- # uname 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2110040 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2110040' 00:24:32.425 killing process with pid 2110040 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2110040 00:24:32.425 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2110040 00:24:32.991 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:32.991 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:32.991 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:32.991 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.991 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.991 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.991 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.991 20:18:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.893 20:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.893 00:24:34.893 real 0m6.453s 00:24:34.893 user 0m5.075s 00:24:34.893 sys 0m2.731s 00:24:34.893 20:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:34.893 20:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:34.893 ************************************ 00:24:34.893 END TEST nvmf_aer 00:24:34.893 ************************************ 00:24:34.893 20:18:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:34.893 20:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:34.893 20:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:34.893 20:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.893 ************************************ 00:24:34.893 START TEST nvmf_async_init 00:24:34.893 ************************************ 00:24:34.893 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:35.152 * Looking for test storage... 
00:24:35.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:35.152 20:18:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=97aa145add6a4e7eb021b4375f56e132 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.152 20:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.482 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:38.483 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:38.483 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
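This device-discovery pass (repeated at the start of each test in the suite) is gather_supported_nvmf_pci_devs: per-family PCI-ID tables are assembled (e810, x722, mlx), the e810 list is selected because e810 is the NIC family under test, and each matching PCI function is then resolved to its netdev through sysfs. The resolution step, stripped down to a sketch (PCI addresses as found on this node):

  for pci in 0000:84:00.0 0000:84:00.1; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] || continue                  # function exposes no netdev
          echo "Found net devices under $pci: ${path##*/}"
      done
  done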
00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:38.483 Found net devices under 0000:84:00.0: cvl_0_0 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:38.483 Found net devices under 0000:84:00.1: cvl_0_1 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:24:38.483 00:24:38.483 --- 10.0.0.2 ping statistics --- 00:24:38.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.483 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:24:38.483 00:24:38.483 --- 10.0.0.1 ping statistics --- 00:24:38.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.483 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2112153 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2112153 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2112153 ']' 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.483 20:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.483 [2024-07-24 20:18:41.932003] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
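nvmfappstart above wraps the target launch with waitforlisten; a rough equivalent (a sketch, not the harness code verbatim; rpc_get_methods is a stock SPDK RPC used here only as a liveness probe):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do                     # max_retries=100, as in the trace
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done

Once the app answers, the async_init body configures it with a handful of RPCs (null bdev, subsystem with a pinned NGUID, listener, host-side attach) before exercising reset and TLS:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_null_create null0 1024 512              # 1024 MiB, 512 B blocks -> num_blocks 2097152
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 97aa145add6a4e7eb021b4375f56e132
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0

The pinned NGUID resurfaces in the bdev dump below as the hyphenated UUID 97aa145a-dd6a-4e7e-b021-b4375f56e132.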
00:24:38.483 [2024-07-24 20:18:41.932101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.483 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.484 [2024-07-24 20:18:42.022721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.484 [2024-07-24 20:18:42.163861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.484 [2024-07-24 20:18:42.163950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.484 [2024-07-24 20:18:42.163971] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.484 [2024-07-24 20:18:42.163988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.484 [2024-07-24 20:18:42.164002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.484 [2024-07-24 20:18:42.164043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 [2024-07-24 20:18:42.357932] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 null0 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:38.743 20:18:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 97aa145add6a4e7eb021b4375f56e132 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 [2024-07-24 20:18:42.403598] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.743 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.002 nvme0n1 00:24:39.002 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.002 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:39.002 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.002 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.002 [ 00:24:39.002 { 00:24:39.002 "name": "nvme0n1", 00:24:39.002 "aliases": [ 00:24:39.002 "97aa145a-dd6a-4e7e-b021-b4375f56e132" 00:24:39.002 ], 00:24:39.002 "product_name": "NVMe disk", 00:24:39.002 "block_size": 512, 00:24:39.002 "num_blocks": 2097152, 00:24:39.002 "uuid": "97aa145a-dd6a-4e7e-b021-b4375f56e132", 00:24:39.002 "assigned_rate_limits": { 00:24:39.002 "rw_ios_per_sec": 0, 00:24:39.002 "rw_mbytes_per_sec": 0, 00:24:39.002 "r_mbytes_per_sec": 0, 00:24:39.002 "w_mbytes_per_sec": 0 00:24:39.002 }, 00:24:39.002 "claimed": false, 00:24:39.002 "zoned": false, 00:24:39.002 "supported_io_types": { 00:24:39.002 "read": true, 00:24:39.002 "write": true, 00:24:39.002 "unmap": false, 00:24:39.002 "flush": true, 00:24:39.002 "reset": true, 00:24:39.002 "nvme_admin": true, 00:24:39.002 "nvme_io": true, 00:24:39.002 "nvme_io_md": false, 00:24:39.002 "write_zeroes": true, 00:24:39.002 "zcopy": false, 00:24:39.002 "get_zone_info": false, 00:24:39.002 "zone_management": false, 00:24:39.002 "zone_append": false, 00:24:39.002 "compare": true, 00:24:39.002 "compare_and_write": true, 00:24:39.002 "abort": true, 00:24:39.002 "seek_hole": false, 00:24:39.002 "seek_data": false, 00:24:39.002 "copy": true, 00:24:39.002 "nvme_iov_md": 
false 00:24:39.002 }, 00:24:39.002 "memory_domains": [ 00:24:39.002 { 00:24:39.002 "dma_device_id": "system", 00:24:39.002 "dma_device_type": 1 00:24:39.002 } 00:24:39.002 ], 00:24:39.002 "driver_specific": { 00:24:39.002 "nvme": [ 00:24:39.002 { 00:24:39.002 "trid": { 00:24:39.002 "trtype": "TCP", 00:24:39.002 "adrfam": "IPv4", 00:24:39.002 "traddr": "10.0.0.2", 00:24:39.002 "trsvcid": "4420", 00:24:39.002 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:39.002 }, 00:24:39.002 "ctrlr_data": { 00:24:39.002 "cntlid": 1, 00:24:39.002 "vendor_id": "0x8086", 00:24:39.002 "model_number": "SPDK bdev Controller", 00:24:39.002 "serial_number": "00000000000000000000", 00:24:39.002 "firmware_revision": "24.09", 00:24:39.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.002 "oacs": { 00:24:39.002 "security": 0, 00:24:39.002 "format": 0, 00:24:39.002 "firmware": 0, 00:24:39.002 "ns_manage": 0 00:24:39.002 }, 00:24:39.002 "multi_ctrlr": true, 00:24:39.002 "ana_reporting": false 00:24:39.002 }, 00:24:39.002 "vs": { 00:24:39.002 "nvme_version": "1.3" 00:24:39.002 }, 00:24:39.002 "ns_data": { 00:24:39.002 "id": 1, 00:24:39.002 "can_share": true 00:24:39.002 } 00:24:39.002 } 00:24:39.002 ], 00:24:39.002 "mp_policy": "active_passive" 00:24:39.002 } 00:24:39.002 } 00:24:39.002 ] 00:24:39.002 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.002 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:39.002 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.002 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.002 [2024-07-24 20:18:42.671068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.002 [2024-07-24 20:18:42.671265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393700 (9): Bad file descriptor 00:24:39.261 [2024-07-24 20:18:42.815816] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
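The reset just logged is a clean reconnect: bdev_nvme_reset_controller tears down the existing fabrics connection (the "Bad file descriptor" flush on tqpair 0x2393700 is that teardown, not a failure) and re-attaches, so the controller returns with the next controller ID. The check amounts to the following sketch (the jq filter is our addition; the JSON path matches the bdev_get_bdevs dump below):

  rpc.py bdev_nvme_reset_controller nvme0
  rpc.py bdev_get_bdevs -b nvme0n1 |
      jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after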
00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.261 [ 00:24:39.261 { 00:24:39.261 "name": "nvme0n1", 00:24:39.261 "aliases": [ 00:24:39.261 "97aa145a-dd6a-4e7e-b021-b4375f56e132" 00:24:39.261 ], 00:24:39.261 "product_name": "NVMe disk", 00:24:39.261 "block_size": 512, 00:24:39.261 "num_blocks": 2097152, 00:24:39.261 "uuid": "97aa145a-dd6a-4e7e-b021-b4375f56e132", 00:24:39.261 "assigned_rate_limits": { 00:24:39.261 "rw_ios_per_sec": 0, 00:24:39.261 "rw_mbytes_per_sec": 0, 00:24:39.261 "r_mbytes_per_sec": 0, 00:24:39.261 "w_mbytes_per_sec": 0 00:24:39.261 }, 00:24:39.261 "claimed": false, 00:24:39.261 "zoned": false, 00:24:39.261 "supported_io_types": { 00:24:39.261 "read": true, 00:24:39.261 "write": true, 00:24:39.261 "unmap": false, 00:24:39.261 "flush": true, 00:24:39.261 "reset": true, 00:24:39.261 "nvme_admin": true, 00:24:39.261 "nvme_io": true, 00:24:39.261 "nvme_io_md": false, 00:24:39.261 "write_zeroes": true, 00:24:39.261 "zcopy": false, 00:24:39.261 "get_zone_info": false, 00:24:39.261 "zone_management": false, 00:24:39.261 "zone_append": false, 00:24:39.261 "compare": true, 00:24:39.261 "compare_and_write": true, 00:24:39.261 "abort": true, 00:24:39.261 "seek_hole": false, 00:24:39.261 "seek_data": false, 00:24:39.261 "copy": true, 00:24:39.261 "nvme_iov_md": false 00:24:39.261 }, 00:24:39.261 "memory_domains": [ 00:24:39.261 { 00:24:39.261 "dma_device_id": "system", 00:24:39.261 "dma_device_type": 1 00:24:39.261 } 00:24:39.261 ], 00:24:39.261 "driver_specific": { 00:24:39.261 "nvme": [ 00:24:39.261 { 00:24:39.261 "trid": { 00:24:39.261 "trtype": "TCP", 00:24:39.261 "adrfam": "IPv4", 00:24:39.261 "traddr": "10.0.0.2", 00:24:39.261 "trsvcid": "4420", 00:24:39.261 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:39.261 }, 00:24:39.261 "ctrlr_data": { 00:24:39.261 "cntlid": 2, 00:24:39.261 "vendor_id": "0x8086", 00:24:39.261 "model_number": "SPDK bdev Controller", 00:24:39.261 "serial_number": "00000000000000000000", 00:24:39.261 "firmware_revision": "24.09", 00:24:39.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.261 "oacs": { 00:24:39.261 "security": 0, 00:24:39.261 "format": 0, 00:24:39.261 "firmware": 0, 00:24:39.261 "ns_manage": 0 00:24:39.261 }, 00:24:39.261 "multi_ctrlr": true, 00:24:39.261 "ana_reporting": false 00:24:39.261 }, 00:24:39.261 "vs": { 00:24:39.261 "nvme_version": "1.3" 00:24:39.261 }, 00:24:39.261 "ns_data": { 00:24:39.261 "id": 1, 00:24:39.261 "can_share": true 00:24:39.261 } 00:24:39.261 } 00:24:39.261 ], 00:24:39.261 "mp_policy": "active_passive" 00:24:39.261 } 00:24:39.261 } 00:24:39.261 ] 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.261 20:18:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4dvIHwMsAQ 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4dvIHwMsAQ 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.261 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.262 [2024-07-24 20:18:42.880403] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.262 [2024-07-24 20:18:42.880622] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4dvIHwMsAQ 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.262 [2024-07-24 20:18:42.888419] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4dvIHwMsAQ 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.262 [2024-07-24 20:18:42.900491] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.262 [2024-07-24 20:18:42.900579] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:39.262 nvme0n1 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
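The TLS leg just traced is four RPCs around a PSK file in the NVMe TLS interchange format (key value, port 4421, and NQNs exactly as in this run; note the trace flags both the target-side PSK path and the host-side spdk_nvme_ctrlr_opts.psk as deprecated for removal in v24.09):

  KEY=$(mktemp)                                       # /tmp/tmp.4dvIHwMsAQ in this run
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"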
00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.262 [ 00:24:39.262 { 00:24:39.262 "name": "nvme0n1", 00:24:39.262 "aliases": [ 00:24:39.262 "97aa145a-dd6a-4e7e-b021-b4375f56e132" 00:24:39.262 ], 00:24:39.262 "product_name": "NVMe disk", 00:24:39.262 "block_size": 512, 00:24:39.262 "num_blocks": 2097152, 00:24:39.262 "uuid": "97aa145a-dd6a-4e7e-b021-b4375f56e132", 00:24:39.262 "assigned_rate_limits": { 00:24:39.262 "rw_ios_per_sec": 0, 00:24:39.262 "rw_mbytes_per_sec": 0, 00:24:39.262 "r_mbytes_per_sec": 0, 00:24:39.262 "w_mbytes_per_sec": 0 00:24:39.262 }, 00:24:39.262 "claimed": false, 00:24:39.262 "zoned": false, 00:24:39.262 "supported_io_types": { 00:24:39.262 "read": true, 00:24:39.262 "write": true, 00:24:39.262 "unmap": false, 00:24:39.262 "flush": true, 00:24:39.262 "reset": true, 00:24:39.262 "nvme_admin": true, 00:24:39.262 "nvme_io": true, 00:24:39.262 "nvme_io_md": false, 00:24:39.262 "write_zeroes": true, 00:24:39.262 "zcopy": false, 00:24:39.262 "get_zone_info": false, 00:24:39.262 "zone_management": false, 00:24:39.262 "zone_append": false, 00:24:39.262 "compare": true, 00:24:39.262 "compare_and_write": true, 00:24:39.262 "abort": true, 00:24:39.262 "seek_hole": false, 00:24:39.262 "seek_data": false, 00:24:39.262 "copy": true, 00:24:39.262 "nvme_iov_md": false 00:24:39.262 }, 00:24:39.262 "memory_domains": [ 00:24:39.262 { 00:24:39.262 "dma_device_id": "system", 00:24:39.262 "dma_device_type": 1 00:24:39.262 } 00:24:39.262 ], 00:24:39.262 "driver_specific": { 00:24:39.262 "nvme": [ 00:24:39.262 { 00:24:39.262 "trid": { 00:24:39.262 "trtype": "TCP", 00:24:39.262 "adrfam": "IPv4", 00:24:39.262 "traddr": "10.0.0.2", 00:24:39.262 "trsvcid": "4421", 00:24:39.262 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:39.262 }, 00:24:39.262 "ctrlr_data": { 00:24:39.262 "cntlid": 3, 00:24:39.262 "vendor_id": "0x8086", 00:24:39.262 "model_number": "SPDK bdev Controller", 00:24:39.262 "serial_number": "00000000000000000000", 00:24:39.262 "firmware_revision": "24.09", 00:24:39.262 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.262 "oacs": { 00:24:39.262 "security": 0, 00:24:39.262 "format": 0, 00:24:39.262 "firmware": 0, 00:24:39.262 "ns_manage": 0 00:24:39.262 }, 00:24:39.262 "multi_ctrlr": true, 00:24:39.262 "ana_reporting": false 00:24:39.262 }, 00:24:39.262 "vs": { 00:24:39.262 "nvme_version": "1.3" 00:24:39.262 }, 00:24:39.262 "ns_data": { 00:24:39.262 "id": 1, 00:24:39.262 "can_share": true 00:24:39.262 } 00:24:39.262 } 00:24:39.262 ], 00:24:39.262 "mp_policy": "active_passive" 00:24:39.262 } 00:24:39.262 } 00:24:39.262 ] 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.262 20:18:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.4dvIHwMsAQ 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:39.262 20:18:43 
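[annotation] This stretch is the TLS leg of async_init (host/async_init.sh@53-65): a key file is created with mktemp, an interchange-format NVMe TLS PSK is written into it and chmodded to 0600, the subsystem is restricted to an explicit host list, a --secure-channel listener is opened on port 4421, the host is registered with its PSK, and the controller is re-attached. The second bdev_get_bdevs dump (trsvcid 4421, cntlid 3) confirms the secure connection, and the warnings note that the PSK-path form of these RPCs is deprecated, scheduled for removal in v24.09. Condensed to a hand-runnable sketch (scripts/rpc.py standing in for the harness's rpc_cmd; the key value is copied from the trace and is a test vector, not a secret):

    key=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
    chmod 0600 "$key"                       # 0600, as the test sets it
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
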
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.262 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.262 rmmod nvme_tcp 00:24:39.262 rmmod nvme_fabrics 00:24:39.521 rmmod nvme_keyring 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2112153 ']' 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2112153 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2112153 ']' 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2112153 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2112153 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2112153' 00:24:39.521 killing process with pid 2112153 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2112153 00:24:39.521 [2024-07-24 20:18:43.136760] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:39.521 [2024-07-24 20:18:43.136842] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:39.521 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2112153 00:24:39.781 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.781 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.781 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.781 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.781 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.781 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.781 20:18:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.781 20:18:43 
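[annotation] Teardown, as traced above: nvmftestfini syncs, unloads nvme-tcp and nvme-fabrics under set +e inside a retry loop (unload can be busy until the host's connections drain; the rmmod lines, including nvme_keyring, are kernel output), then kills the target process by pid. Roughly, under the assumption that the loop breaks once modprobe -r succeeds:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # busy until the TCP queues are gone
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill 2112153 && wait 2112153            # the nvmf_tgt pid from this run
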
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:42.313 00:24:42.313 real 0m6.903s 00:24:42.313 user 0m2.669s 00:24:42.313 sys 0m2.810s 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.313 ************************************ 00:24:42.313 END TEST nvmf_async_init 00:24:42.313 ************************************ 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.313 ************************************ 00:24:42.313 START TEST dma 00:24:42.313 ************************************ 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:42.313 * Looking for test storage... 00:24:42.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.313 
20:18:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.313 20:18:45 
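[annotation] Most of the dma output above and below is just nvmf/common.sh being sourced (hence the repeated PATH echoes from paths/export.sh). The suite does no work on this run: host/dma.sh@12-13 test the transport and exit immediately, which is why END TEST dma lands after roughly a tenth of a second of wall time. As a sketch, with the variable name assumed (the trace shows it already expanded to tcp):

    # host/dma.sh guard: the dma path is only exercised over RDMA
    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0
    fi
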
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:42.313 00:24:42.313 real 0m0.107s 00:24:42.313 user 0m0.049s 00:24:42.313 sys 0m0.065s 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:42.313 ************************************ 00:24:42.313 END TEST dma 00:24:42.313 ************************************ 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.313 ************************************ 00:24:42.313 START TEST nvmf_identify 00:24:42.313 ************************************ 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:42.313 * Looking for test storage... 00:24:42.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:42.313 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:42.314 20:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.846 20:18:48 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:44.846 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.846 20:18:48 
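[annotation] The wall of nvmf/common.sh lines above is NIC selection: device-ID tables are built for Intel E810, Intel X722 and several Mellanox parts, and because this job runs with the e810 set, pci_devs is narrowed to those IDs before the per-device scan that prints the Found 0000:84:00.0/1 lines. Reconstructed from the trace (IDs verbatim; the family labels in comments are approximate):

    e810=(0x1592 0x159b)        # Intel E810 family (0x159b matches both ports below)
    x722=(0x37d2)               # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)   # Mellanox
    pci_devs=("${e810[@]}")     # [[ e810 == e810 ]] in the trace selects this set
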
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:44.846 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:44.846 Found net devices under 0000:84:00.0: cvl_0_0 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:44.846 Found net devices under 0000:84:00.1: cvl_0_1 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.846 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:24:45.105 00:24:45.105 --- 10.0.0.2 ping statistics --- 00:24:45.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.105 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:24:45.105 00:24:45.105 --- 10.0.0.1 ping statistics --- 00:24:45.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.105 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2114422 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2114422 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2114422 ']' 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.105 20:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:45.105 [2024-07-24 20:18:48.784045] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
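[annotation] At this point the identify fixture is in place: the two E810 netdevs are split across namespaces, cvl_0_0 with 10.0.0.2 inside cvl_0_0_ns_spdk and cvl_0_1 with 10.0.0.1 left in the default namespace, an iptables rule admits TCP/4420, both directions ping, and nvmf_tgt is starting inside the namespace (its EAL banner continues below). The rpc_cmd calls that follow then build the target-side configuration. A condensed, hand-runnable version, with scripts/rpc.py standing in for rpc_cmd and paths made relative:

    # plumbing (interface names from this run)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # target + subsystem (the rpc_cmd sequence traced below)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
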
00:24:45.105 [2024-07-24 20:18:48.784217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.105 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.364 [2024-07-24 20:18:48.913836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.364 [2024-07-24 20:18:49.067139] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.364 [2024-07-24 20:18:49.067206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.364 [2024-07-24 20:18:49.067226] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.364 [2024-07-24 20:18:49.067244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.364 [2024-07-24 20:18:49.067259] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.364 [2024-07-24 20:18:49.067345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.364 [2024-07-24 20:18:49.067409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.364 [2024-07-24 20:18:49.067486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.364 [2024-07-24 20:18:49.067491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.739 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:46.739 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:46.739 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.740 [2024-07-24 20:18:50.150086] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.740 Malloc0 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.740 [2024-07-24 20:18:50.236863] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.740 [ 00:24:46.740 { 00:24:46.740 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:46.740 "subtype": "Discovery", 00:24:46.740 "listen_addresses": [ 00:24:46.740 { 00:24:46.740 "trtype": "TCP", 00:24:46.740 "adrfam": "IPv4", 00:24:46.740 "traddr": "10.0.0.2", 00:24:46.740 "trsvcid": "4420" 00:24:46.740 } 00:24:46.740 ], 00:24:46.740 "allow_any_host": true, 00:24:46.740 "hosts": [] 00:24:46.740 }, 00:24:46.740 { 00:24:46.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.740 "subtype": "NVMe", 00:24:46.740 "listen_addresses": [ 00:24:46.740 { 00:24:46.740 "trtype": "TCP", 00:24:46.740 "adrfam": "IPv4", 00:24:46.740 "traddr": "10.0.0.2", 00:24:46.740 "trsvcid": "4420" 00:24:46.740 } 00:24:46.740 ], 00:24:46.740 "allow_any_host": true, 00:24:46.740 "hosts": [], 00:24:46.740 "serial_number": "SPDK00000000000001", 00:24:46.740 "model_number": "SPDK bdev Controller", 00:24:46.740 "max_namespaces": 32, 00:24:46.740 "min_cntlid": 1, 00:24:46.740 "max_cntlid": 65519, 00:24:46.740 "namespaces": [ 00:24:46.740 { 00:24:46.740 "nsid": 1, 00:24:46.740 "bdev_name": "Malloc0", 00:24:46.740 "name": "Malloc0", 00:24:46.740 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:46.740 "eui64": "ABCDEF0123456789", 00:24:46.740 "uuid": "4b5f46d9-daa5-4440-ac83-9407ed5e15a5" 00:24:46.740 } 00:24:46.740 ] 00:24:46.740 } 00:24:46.740 ] 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.740 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:46.740 [2024-07-24 20:18:50.281983] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:24:46.740 [2024-07-24 20:18:50.282083] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114639 ] 00:24:46.740 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.740 [2024-07-24 20:18:50.330628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:46.740 [2024-07-24 20:18:50.330718] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:46.740 [2024-07-24 20:18:50.330732] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:46.740 [2024-07-24 20:18:50.330753] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:46.740 [2024-07-24 20:18:50.330772] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:46.740 [2024-07-24 20:18:50.331290] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:46.740 [2024-07-24 20:18:50.331374] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x88e540 0 00:24:46.740 [2024-07-24 20:18:50.348440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:46.740 [2024-07-24 20:18:50.348489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:46.740 [2024-07-24 20:18:50.348508] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:46.740 [2024-07-24 20:18:50.348518] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:46.740 [2024-07-24 20:18:50.348589] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.348606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.348617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540) 00:24:46.740 [2024-07-24 20:18:50.348642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:46.740 [2024-07-24 20:18:50.348679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0 00:24:46.740 [2024-07-24 20:18:50.356448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.740 [2024-07-24 20:18:50.356472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.740 [2024-07-24 20:18:50.356491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.356501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540 00:24:46.740 [2024-07-24 20:18:50.356521] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:46.740 [2024-07-24 20:18:50.356536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:46.740 [2024-07-24 20:18:50.356549] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:24:46.740 [2024-07-24 20:18:50.356587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.356599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.356608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540) 00:24:46.740 [2024-07-24 20:18:50.356623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.740 [2024-07-24 20:18:50.356656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0 00:24:46.740 [2024-07-24 20:18:50.356960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.740 [2024-07-24 20:18:50.356981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.740 [2024-07-24 20:18:50.356990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.356999] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540 00:24:46.740 [2024-07-24 20:18:50.357017] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:46.740 [2024-07-24 20:18:50.357037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:46.740 [2024-07-24 20:18:50.357053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.357063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.357071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540) 00:24:46.740 [2024-07-24 20:18:50.357086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.740 [2024-07-24 20:18:50.357115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0 00:24:46.740 [2024-07-24 20:18:50.357289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.740 [2024-07-24 20:18:50.357310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.740 [2024-07-24 20:18:50.357319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.357328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540 00:24:46.740 [2024-07-24 20:18:50.357339] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:46.740 [2024-07-24 20:18:50.357365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:46.740 [2024-07-24 20:18:50.357383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.357392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.740 [2024-07-24 20:18:50.357401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540) 00:24:46.741 [2024-07-24 20:18:50.357415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.741 [2024-07-24 20:18:50.357453] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0 00:24:46.741 [2024-07-24 20:18:50.357688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.741 [2024-07-24 20:18:50.357704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.741 [2024-07-24 20:18:50.357713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.741 [2024-07-24 20:18:50.357722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540 00:24:46.741 [2024-07-24 20:18:50.357735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:46.741 [2024-07-24 20:18:50.357757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.741 [2024-07-24 20:18:50.357768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.741 [2024-07-24 20:18:50.357777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540) 00:24:46.741 [2024-07-24 20:18:50.357792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.741 [2024-07-24 20:18:50.357820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0 00:24:46.741 [2024-07-24 20:18:50.358033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.741 [2024-07-24 20:18:50.358049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.741 [2024-07-24 20:18:50.358058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.741 [2024-07-24 20:18:50.358067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540 00:24:46.741 [2024-07-24 20:18:50.358079] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:46.741 [2024-07-24 20:18:50.358090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:46.741 [2024-07-24 20:18:50.358107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:46.741 [2024-07-24 20:18:50.358220] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:46.741 [2024-07-24 20:18:50.358232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:46.741 [2024-07-24 20:18:50.358251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.741 [2024-07-24 20:18:50.358261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.741 [2024-07-24 20:18:50.358270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540) 00:24:46.741 [2024-07-24 20:18:50.358285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.741 [2024-07-24 20:18:50.358313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0 00:24:46.741 [2024-07-24 20:18:50.358585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.741 
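[annotation] The DEBUG stream running through here is spdk_nvme_identify bringing up the discovery controller step by step: FABRIC CONNECT on the admin queue (CNTLID 0x0001 comes back), property reads of vs and cap, the check en path (CC.EN = 0 && CSTS.RDY = 0, so the controller is simply enabled by writing CC.EN = 1 and polled until CSTS.RDY = 1), then IDENTIFY (06h, cdw10 CNS=1) with a 4096-byte payload, followed below by AER configuration and keep-alive setup. The invocation being traced, from identify.sh@39 above, is runnable as-is against the target in this log:

    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all
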
[2024-07-24 20:18:50.358606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.741 [2024-07-24 20:18:50.358615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.358630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540
00:24:46.741 [2024-07-24 20:18:50.358642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:24:46.741 [2024-07-24 20:18:50.358664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.358676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.358685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540)
00:24:46.741 [2024-07-24 20:18:50.358699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.741 [2024-07-24 20:18:50.358728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0
00:24:46.741 [2024-07-24 20:18:50.358901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.741 [2024-07-24 20:18:50.358916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.741 [2024-07-24 20:18:50.358926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.358934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540
00:24:46.741 [2024-07-24 20:18:50.358945] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:24:46.741 [2024-07-24 20:18:50.358956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:24:46.741 [2024-07-24 20:18:50.358973] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:24:46.741 [2024-07-24 20:18:50.358999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:24:46.741 [2024-07-24 20:18:50.359022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540)
00:24:46.741 [2024-07-24 20:18:50.359047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.741 [2024-07-24 20:18:50.359075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0
00:24:46.741 [2024-07-24 20:18:50.359312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:46.741 [2024-07-24 20:18:50.359333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:46.741 [2024-07-24 20:18:50.359343] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359353] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88e540): datao=0, datal=4096, cccid=0
00:24:46.741 [2024-07-24 20:18:50.359363] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ee3c0) on tqpair(0x88e540): expected_datao=0, payload_size=4096
00:24:46.741 [2024-07-24 20:18:50.359374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359389] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359401] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.741 [2024-07-24 20:18:50.359502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.741 [2024-07-24 20:18:50.359511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540
00:24:46.741 [2024-07-24 20:18:50.359537] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:24:46.741 [2024-07-24 20:18:50.359549] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:24:46.741 [2024-07-24 20:18:50.359565] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:24:46.741 [2024-07-24 20:18:50.359578] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:24:46.741 [2024-07-24 20:18:50.359589] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:24:46.741 [2024-07-24 20:18:50.359600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:24:46.741 [2024-07-24 20:18:50.359620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:24:46.741 [2024-07-24 20:18:50.359642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540)
00:24:46.741 [2024-07-24 20:18:50.359684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:46.741 [2024-07-24 20:18:50.359713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0
00:24:46.741 [2024-07-24 20:18:50.359929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.741 [2024-07-24 20:18:50.359944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.741 [2024-07-24 20:18:50.359953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540
00:24:46.741 [2024-07-24 20:18:50.359978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.359997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88e540)
00:24:46.741 [2024-07-24 20:18:50.360010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.741 [2024-07-24 20:18:50.360025] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.360034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.360042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x88e540)
00:24:46.741 [2024-07-24 20:18:50.360054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.741 [2024-07-24 20:18:50.360067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.360076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.360085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x88e540)
00:24:46.741 [2024-07-24 20:18:50.360096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.741 [2024-07-24 20:18:50.360109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.360118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.360127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.741 [2024-07-24 20:18:50.360139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.741 [2024-07-24 20:18:50.360150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:24:46.741 [2024-07-24 20:18:50.360176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:24:46.741 [2024-07-24 20:18:50.360197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.741 [2024-07-24 20:18:50.360208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88e540)
00:24:46.742 [2024-07-24 20:18:50.360223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.742 [2024-07-24 20:18:50.360253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee3c0, cid 0, qid 0
00:24:46.742 [2024-07-24 20:18:50.360268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee540, cid 1, qid 0
00:24:46.742 [2024-07-24 20:18:50.360279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee6c0, cid 2, qid 0
00:24:46.742 [2024-07-24 20:18:50.360290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.742 [2024-07-24 20:18:50.360300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee9c0, cid 4, qid 0
00:24:46.742 [2024-07-24 20:18:50.364443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.742 [2024-07-24 20:18:50.364465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.742 [2024-07-24 20:18:50.364475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.364484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee9c0) on tqpair=0x88e540
00:24:46.742 [2024-07-24 20:18:50.364496] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:24:46.742 [2024-07-24 20:18:50.364509] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:24:46.742 [2024-07-24 20:18:50.364534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.364547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88e540)
00:24:46.742 [2024-07-24 20:18:50.364562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.742 [2024-07-24 20:18:50.364592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee9c0, cid 4, qid 0
00:24:46.742 [2024-07-24 20:18:50.364825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:46.742 [2024-07-24 20:18:50.364845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:46.742 [2024-07-24 20:18:50.364855] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.364863] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88e540): datao=0, datal=4096, cccid=4
00:24:46.742 [2024-07-24 20:18:50.364874] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ee9c0) on tqpair(0x88e540): expected_datao=0, payload_size=4096
00:24:46.742 [2024-07-24 20:18:50.364884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.364898] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.364908] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.364947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.742 [2024-07-24 20:18:50.364961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.742 [2024-07-24 20:18:50.364970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.364979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee9c0) on tqpair=0x88e540
00:24:46.742 [2024-07-24 20:18:50.365005] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:24:46.742 [2024-07-24 20:18:50.365054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.365069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88e540)
00:24:46.742 [2024-07-24 20:18:50.365084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.742 [2024-07-24 20:18:50.365105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.365116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.365124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88e540)
00:24:46.742 [2024-07-24 20:18:50.365137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:24:46.742 [2024-07-24 20:18:50.365173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee9c0, cid 4, qid 0
00:24:46.742 [2024-07-24 20:18:50.365189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8eeb40, cid 5, qid 0
00:24:46.742 [2024-07-24 20:18:50.365479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:46.742 [2024-07-24 20:18:50.365501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:46.742 [2024-07-24 20:18:50.365510] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.365519] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88e540): datao=0, datal=1024, cccid=4
00:24:46.742 [2024-07-24 20:18:50.365538] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ee9c0) on tqpair(0x88e540): expected_datao=0, payload_size=1024
00:24:46.742 [2024-07-24 20:18:50.365548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.365562] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.365571] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.365583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.742 [2024-07-24 20:18:50.365595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.742 [2024-07-24 20:18:50.365603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.365612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8eeb40) on tqpair=0x88e540
00:24:46.742 [2024-07-24 20:18:50.406634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.742 [2024-07-24 20:18:50.406660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.742 [2024-07-24 20:18:50.406670] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.406680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee9c0) on tqpair=0x88e540
00:24:46.742 [2024-07-24 20:18:50.406705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.406718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88e540)
00:24:46.742 [2024-07-24 20:18:50.406734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.742 [2024-07-24 20:18:50.406774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee9c0, cid 4, qid 0
00:24:46.742 [2024-07-24 20:18:50.406971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:46.742 [2024-07-24 20:18:50.406987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:46.742 [2024-07-24 20:18:50.406996] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.407005] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88e540): datao=0, datal=3072, cccid=4
00:24:46.742 [2024-07-24 20:18:50.407016] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ee9c0) on tqpair(0x88e540): expected_datao=0, payload_size=3072
00:24:46.742 [2024-07-24 20:18:50.407026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.407040] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
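The GET LOG PAGE (02) commands above read log page 0x70, the NVMe-oF discovery log: a first read of the 1024-byte header (cdw10:00ff0070), then the records once the count is known. A minimal host-side sketch of the same header fetch, assuming SPDK's public spdk_nvme_ctrlr_cmd_get_log_page() API (the ctx/on_log_done names are illustrative, not taken from the log):

    /* Hedged sketch: read the discovery log header the way the trace does. */
    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    struct disc_ctx {                 /* illustrative helper type */
    	bool done;
    };

    static void
    on_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
    	struct disc_ctx *ctx = arg;

    	ctx->done = true;
    	if (spdk_nvme_cpl_is_error(cpl)) {
    		fprintf(stderr, "GET LOG PAGE failed\n");
    	}
    }

    static int
    read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
    		      struct spdk_nvmf_discovery_log_page *hdr,
    		      struct disc_ctx *ctx)
    {
    	/* Matches the first GET LOG PAGE (cdw10:00ff0070): log page 0x70,
    	 * offset 0, just the fixed header with genctr/numrec/recfmt. */
    	int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
    						  0 /* nsid, as in the trace */,
    						  hdr, sizeof(*hdr), 0 /* offset */,
    						  on_log_done, ctx);
    	if (rc != 0) {
    		return rc;
    	}
    	while (!ctx->done) {
    		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    	}
    	return 0;
    }

The later reads at cdw10:02ff0070 and cdw10:00010070 would then be follow-up calls with a non-zero offset, which explains the 1024-, 3072- and 8-byte c2h_data transfers in the trace.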
00:24:46.742 [2024-07-24 20:18:50.407050] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.407066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.742 [2024-07-24 20:18:50.407079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.742 [2024-07-24 20:18:50.407088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.407097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee9c0) on tqpair=0x88e540
00:24:46.742 [2024-07-24 20:18:50.407123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.407136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88e540)
00:24:46.742 [2024-07-24 20:18:50.407151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.742 [2024-07-24 20:18:50.407189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee9c0, cid 4, qid 0
00:24:46.742 [2024-07-24 20:18:50.407365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:46.742 [2024-07-24 20:18:50.407381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:46.742 [2024-07-24 20:18:50.407390] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.407399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88e540): datao=0, datal=8, cccid=4
00:24:46.742 [2024-07-24 20:18:50.407409] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ee9c0) on tqpair(0x88e540): expected_datao=0, payload_size=8
00:24:46.742 [2024-07-24 20:18:50.407419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.407441] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.407453] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.451454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.742 [2024-07-24 20:18:50.451478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.742 [2024-07-24 20:18:50.451488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.742 [2024-07-24 20:18:50.451498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee9c0) on tqpair=0x88e540
=====================================================
00:24:46.742 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:46.742 =====================================================
00:24:46.742 Controller Capabilities/Features
00:24:46.742 ================================
00:24:46.742 Vendor ID: 0000
00:24:46.742 Subsystem Vendor ID: 0000
00:24:46.742 Serial Number: ....................
00:24:46.742 Model Number: ........................................
00:24:46.742 Firmware Version: 24.09
00:24:46.742 Recommended Arb Burst: 0
00:24:46.742 IEEE OUI Identifier: 00 00 00
00:24:46.742 Multi-path I/O
00:24:46.742 May have multiple subsystem ports: No
00:24:46.742 May have multiple controllers: No
00:24:46.742 Associated with SR-IOV VF: No
00:24:46.742 Max Data Transfer Size: 131072
00:24:46.742 Max Number of Namespaces: 0
00:24:46.742 Max Number of I/O Queues: 1024
00:24:46.742 NVMe Specification Version (VS): 1.3
00:24:46.742 NVMe Specification Version (Identify): 1.3
00:24:46.742 Maximum Queue Entries: 128
00:24:46.742 Contiguous Queues Required: Yes
00:24:46.742 Arbitration Mechanisms Supported
00:24:46.742 Weighted Round Robin: Not Supported
00:24:46.742 Vendor Specific: Not Supported
00:24:46.742 Reset Timeout: 15000 ms
00:24:46.742 Doorbell Stride: 4 bytes
00:24:46.742 NVM Subsystem Reset: Not Supported
00:24:46.742 Command Sets Supported
00:24:46.742 NVM Command Set: Supported
00:24:46.742 Boot Partition: Not Supported
00:24:46.742 Memory Page Size Minimum: 4096 bytes
00:24:46.743 Memory Page Size Maximum: 4096 bytes
00:24:46.743 Persistent Memory Region: Not Supported
00:24:46.743 Optional Asynchronous Events Supported
00:24:46.743 Namespace Attribute Notices: Not Supported
00:24:46.743 Firmware Activation Notices: Not Supported
00:24:46.743 ANA Change Notices: Not Supported
00:24:46.743 PLE Aggregate Log Change Notices: Not Supported
00:24:46.743 LBA Status Info Alert Notices: Not Supported
00:24:46.743 EGE Aggregate Log Change Notices: Not Supported
00:24:46.743 Normal NVM Subsystem Shutdown event: Not Supported
00:24:46.743 Zone Descriptor Change Notices: Not Supported
00:24:46.743 Discovery Log Change Notices: Supported
00:24:46.743 Controller Attributes
00:24:46.743 128-bit Host Identifier: Not Supported
00:24:46.743 Non-Operational Permissive Mode: Not Supported
00:24:46.743 NVM Sets: Not Supported
00:24:46.743 Read Recovery Levels: Not Supported
00:24:46.743 Endurance Groups: Not Supported
00:24:46.743 Predictable Latency Mode: Not Supported
00:24:46.743 Traffic Based Keep ALive: Not Supported
00:24:46.743 Namespace Granularity: Not Supported
00:24:46.743 SQ Associations: Not Supported
00:24:46.743 UUID List: Not Supported
00:24:46.743 Multi-Domain Subsystem: Not Supported
00:24:46.743 Fixed Capacity Management: Not Supported
00:24:46.743 Variable Capacity Management: Not Supported
00:24:46.743 Delete Endurance Group: Not Supported
00:24:46.743 Delete NVM Set: Not Supported
00:24:46.743 Extended LBA Formats Supported: Not Supported
00:24:46.743 Flexible Data Placement Supported: Not Supported
00:24:46.743
00:24:46.743 Controller Memory Buffer Support
00:24:46.743 ================================
00:24:46.743 Supported: No
00:24:46.743
00:24:46.743 Persistent Memory Region Support
00:24:46.743 ================================
00:24:46.743 Supported: No
00:24:46.743
00:24:46.743 Admin Command Set Attributes
00:24:46.743 ============================
00:24:46.743 Security Send/Receive: Not Supported
00:24:46.743 Format NVM: Not Supported
00:24:46.743 Firmware Activate/Download: Not Supported
00:24:46.743 Namespace Management: Not Supported
00:24:46.743 Device Self-Test: Not Supported
00:24:46.743 Directives: Not Supported
00:24:46.743 NVMe-MI: Not Supported
00:24:46.743 Virtualization Management: Not Supported
00:24:46.743 Doorbell Buffer Config: Not Supported
00:24:46.743 Get LBA Status Capability: Not Supported
00:24:46.743 Command & Feature Lockdown Capability: Not Supported
00:24:46.743 Abort Command Limit: 1
00:24:46.743 Async Event Request Limit: 4
00:24:46.743 Number of Firmware Slots: N/A
00:24:46.743 Firmware Slot 1 Read-Only: N/A
00:24:46.743 Firmware Activation Without Reset: N/A
00:24:46.743 Multiple Update Detection Support: N/A
00:24:46.743 Firmware Update Granularity: No Information Provided
00:24:46.743 Per-Namespace SMART Log: No
00:24:46.743 Asymmetric Namespace Access Log Page: Not Supported
00:24:46.743 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:46.743 Command Effects Log Page: Not Supported
00:24:46.743 Get Log Page Extended Data: Supported
00:24:46.743 Telemetry Log Pages: Not Supported
00:24:46.743 Persistent Event Log Pages: Not Supported
00:24:46.743 Supported Log Pages Log Page: May Support
00:24:46.743 Commands Supported & Effects Log Page: Not Supported
00:24:46.743 Feature Identifiers & Effects Log Page:May Support
00:24:46.743 NVMe-MI Commands & Effects Log Page: May Support
00:24:46.743 Data Area 4 for Telemetry Log: Not Supported
00:24:46.743 Error Log Page Entries Supported: 128
00:24:46.743 Keep Alive: Not Supported
00:24:46.743
00:24:46.743 NVM Command Set Attributes
00:24:46.743 ==========================
00:24:46.743 Submission Queue Entry Size
00:24:46.743 Max: 1
00:24:46.743 Min: 1
00:24:46.743 Completion Queue Entry Size
00:24:46.743 Max: 1
00:24:46.743 Min: 1
00:24:46.743 Number of Namespaces: 0
00:24:46.743 Compare Command: Not Supported
00:24:46.743 Write Uncorrectable Command: Not Supported
00:24:46.743 Dataset Management Command: Not Supported
00:24:46.743 Write Zeroes Command: Not Supported
00:24:46.743 Set Features Save Field: Not Supported
00:24:46.743 Reservations: Not Supported
00:24:46.743 Timestamp: Not Supported
00:24:46.743 Copy: Not Supported
00:24:46.743 Volatile Write Cache: Not Present
00:24:46.743 Atomic Write Unit (Normal): 1
00:24:46.743 Atomic Write Unit (PFail): 1
00:24:46.743 Atomic Compare & Write Unit: 1
00:24:46.743 Fused Compare & Write: Supported
00:24:46.743 Scatter-Gather List
00:24:46.743 SGL Command Set: Supported
00:24:46.743 SGL Keyed: Supported
00:24:46.743 SGL Bit Bucket Descriptor: Not Supported
00:24:46.743 SGL Metadata Pointer: Not Supported
00:24:46.743 Oversized SGL: Not Supported
00:24:46.743 SGL Metadata Address: Not Supported
00:24:46.743 SGL Offset: Supported
00:24:46.743 Transport SGL Data Block: Not Supported
00:24:46.743 Replay Protected Memory Block: Not Supported
00:24:46.743
00:24:46.743 Firmware Slot Information
00:24:46.743 =========================
00:24:46.743 Active slot: 0
00:24:46.743
00:24:46.743
00:24:46.743 Error Log
00:24:46.743 =========
00:24:46.743
00:24:46.743 Active Namespaces
00:24:46.743 =================
00:24:46.743 Discovery Log Page
00:24:46.743 ==================
00:24:46.743 Generation Counter: 2
00:24:46.743 Number of Records: 2
00:24:46.743 Record Format: 0
00:24:46.743
00:24:46.743 Discovery Log Entry 0
00:24:46.743 ----------------------
00:24:46.743 Transport Type: 3 (TCP)
00:24:46.743 Address Family: 1 (IPv4)
00:24:46.743 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:46.743 Entry Flags:
00:24:46.743 Duplicate Returned Information: 1
00:24:46.743 Explicit Persistent Connection Support for Discovery: 1
00:24:46.743 Transport Requirements:
00:24:46.743 Secure Channel: Not Required
00:24:46.743 Port ID: 0 (0x0000)
00:24:46.743 Controller ID: 65535 (0xffff)
00:24:46.743 Admin Max SQ Size: 128
00:24:46.743 Transport Service Identifier: 4420
00:24:46.743 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:46.743 Transport Address: 10.0.0.2
00:24:46.743 Discovery Log Entry 1
00:24:46.743 ----------------------
00:24:46.743 Transport Type: 3 (TCP)
00:24:46.743 Address Family: 1 (IPv4)
00:24:46.743 Subsystem Type: 2 (NVM Subsystem)
00:24:46.743 Entry Flags:
00:24:46.743 Duplicate Returned Information: 0
00:24:46.743 Explicit Persistent Connection Support for Discovery: 0
00:24:46.743 Transport Requirements:
00:24:46.743 Secure Channel: Not Required
00:24:46.743 Port ID: 0 (0x0000)
00:24:46.743 Controller ID: 65535 (0xffff)
00:24:46.743 Admin Max SQ Size: 128
00:24:46.743 Transport Service Identifier: 4420
00:24:46.743 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:46.743 Transport Address: 10.0.0.2
[2024-07-24 20:18:50.451655] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:24:46.743 [2024-07-24 20:18:50.451685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee3c0) on tqpair=0x88e540
00:24:46.743 [2024-07-24 20:18:50.451702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.743 [2024-07-24 20:18:50.451715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee540) on tqpair=0x88e540
00:24:46.743 [2024-07-24 20:18:50.451725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.743 [2024-07-24 20:18:50.451737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee6c0) on tqpair=0x88e540
00:24:46.743 [2024-07-24 20:18:50.451747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.743 [2024-07-24 20:18:50.451758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.743 [2024-07-24 20:18:50.451768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.743 [2024-07-24 20:18:50.451793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.743 [2024-07-24 20:18:50.451805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.743 [2024-07-24 20:18:50.451815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.743 [2024-07-24 20:18:50.451830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.743 [2024-07-24 20:18:50.451864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.743 [2024-07-24 20:18:50.452100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.743 [2024-07-24 20:18:50.452117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.743 [2024-07-24 20:18:50.452126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.743 [2024-07-24 20:18:50.452135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.743 [2024-07-24 20:18:50.452156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.452167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.452176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.452191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
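Each "Discovery Log Entry" block printed above is a rendering of one raw spdk_nvmf_discovery_log_page_entry record. A rough sketch of walking those records from an already-fetched page, assuming the structures from spdk/nvmf_spec.h (the page buffer and function name are illustrative):

    #include "spdk/stdinc.h"
    #include "spdk/nvmf_spec.h"

    /* Hedged sketch: print the same fields the report above shows, straight
     * from the raw discovery log page. 'page' is assumed to hold the full
     * log (header plus numrec entries) already. */
    static void
    print_discovery_entries(const struct spdk_nvmf_discovery_log_page *page)
    {
    	for (uint64_t i = 0; i < page->numrec; i++) {
    		const struct spdk_nvmf_discovery_log_page_entry *e = &page->entries[i];

    		printf("Discovery Log Entry %" PRIu64 "\n", i);
    		printf("  Transport Type: %u\n", e->trtype);   /* 3 == TCP */
    		printf("  Address Family: %u\n", e->adrfam);   /* 1 == IPv4 */
    		printf("  Subsystem Type: %u\n", e->subtype);  /* 2 == NVM, 3 == discovery */
    		printf("  Transport Service Identifier: %.*s\n",
    		       (int)sizeof(e->trsvcid), e->trsvcid);   /* "4420" above */
    		printf("  NQN: %.*s\n", (int)sizeof(e->subnqn), e->subnqn);
    		printf("  Transport Address: %.*s\n",
    		       (int)sizeof(e->traddr), e->traddr);     /* "10.0.0.2" above */
    	}
    }

Entry 0 here is the discovery subsystem itself (subtype 3); entry 1 is the test target nqn.2016-06.io.spdk:cnode1 (subtype 2), which the next identify run connects to directly.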
00:24:46.744 [2024-07-24 20:18:50.452227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.452424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.452454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.452464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.452473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.744 [2024-07-24 20:18:50.452484] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:24:46.744 [2024-07-24 20:18:50.452496] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:24:46.744 [2024-07-24 20:18:50.452518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.452530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.452539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.452554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.744 [2024-07-24 20:18:50.452583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.452770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.452790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.452799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.452808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.744 [2024-07-24 20:18:50.452832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.452845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.452854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.452868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.744 [2024-07-24 20:18:50.452897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.453082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.453103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.453112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.744 [2024-07-24 20:18:50.453144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.453179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.744 [2024-07-24 20:18:50.453207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.453372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.453392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.453407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.744 [2024-07-24 20:18:50.453448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.453486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.744 [2024-07-24 20:18:50.453515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.453682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.453702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.453711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.744 [2024-07-24 20:18:50.453743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.453764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.453778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.744 [2024-07-24 20:18:50.453806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.453989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.454009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.454019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.744 [2024-07-24 20:18:50.454050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.454085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.744 [2024-07-24 20:18:50.454114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.454299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.454320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.454330] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.744 [2024-07-24 20:18:50.454361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.454397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.744 [2024-07-24 20:18:50.454426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.454593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.454613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.454622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.744 [2024-07-24 20:18:50.454659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.744 [2024-07-24 20:18:50.454695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.744 [2024-07-24 20:18:50.454724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.744 [2024-07-24 20:18:50.454888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.744 [2024-07-24 20:18:50.454904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.744 [2024-07-24 20:18:50.454914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.744 [2024-07-24 20:18:50.454923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.745 [2024-07-24 20:18:50.454944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.454956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.454965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.745 [2024-07-24 20:18:50.454979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.745 [2024-07-24 20:18:50.455007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.745 [2024-07-24 20:18:50.455171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.745 [2024-07-24 20:18:50.455192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.745 [2024-07-24 20:18:50.455201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.455210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.745 [2024-07-24 20:18:50.455232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.455244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.455253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.745 [2024-07-24 20:18:50.455267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.745 [2024-07-24 20:18:50.455295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.745 [2024-07-24 20:18:50.459448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.745 [2024-07-24 20:18:50.459471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.745 [2024-07-24 20:18:50.459480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.459489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.745 [2024-07-24 20:18:50.459513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.459526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.459534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88e540)
00:24:46.745 [2024-07-24 20:18:50.459549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.745 [2024-07-24 20:18:50.459579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ee840, cid 3, qid 0
00:24:46.745 [2024-07-24 20:18:50.459784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:46.745 [2024-07-24 20:18:50.459805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:46.745 [2024-07-24 20:18:50.459814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:46.745 [2024-07-24 20:18:50.459823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ee840) on tqpair=0x88e540
00:24:46.745 [2024-07-24 20:18:50.459841] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:24:46.745
00:24:46.745 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:24:46.745 [2024-07-24 20:18:50.513274] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
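The -r argument on the spdk_nvme_identify invocation above is SPDK's standard key:value transport-ID string. Programmatically, the same string can be parsed and used to attach with the public API, roughly as follows (error handling omitted; this is a sketch of the documented calls, not the tool's internals):

    #include "spdk/nvme.h"

    /* Hedged sketch: parse the same key:value string the -r flag takes and
     * connect to the subsystem it names, as this second identify run does. */
    static struct spdk_nvme_ctrlr *
    connect_from_trid_string(void)
    {
    	struct spdk_nvme_transport_id trid = {};

    	spdk_nvme_transport_id_parse(&trid,
    		"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
    		"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

    	/* NULL/0 opts -> library defaults (keep-alive, queue sizes, ...). */
    	return spdk_nvme_connect(&trid, NULL, 0);
    }

With subnqn set to a concrete subsystem rather than the discovery NQN, the connect goes straight to nqn.2016-06.io.spdk:cnode1, which is why the trace below repeats the admin-queue bring-up against a new qpair.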
00:24:46.745 [2024-07-24 20:18:50.513330] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114698 ]
00:24:47.006 EAL: No free 2048 kB hugepages reported on node 1
00:24:47.006 [2024-07-24 20:18:50.559864] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:24:47.006 [2024-07-24 20:18:50.559934] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:47.006 [2024-07-24 20:18:50.559949] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:47.006 [2024-07-24 20:18:50.559967] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:47.006 [2024-07-24 20:18:50.559983] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:47.006 [2024-07-24 20:18:50.563482] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:24:47.006 [2024-07-24 20:18:50.563536] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c84540 0
00:24:47.006 [2024-07-24 20:18:50.571441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:47.006 [2024-07-24 20:18:50.571473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:47.006 [2024-07-24 20:18:50.571485] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:47.006 [2024-07-24 20:18:50.571493] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:47.006 [2024-07-24 20:18:50.571546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.006 [2024-07-24 20:18:50.571562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.006 [2024-07-24 20:18:50.571572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.006 [2024-07-24 20:18:50.571592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:47.006 [2024-07-24 20:18:50.571629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.006 [2024-07-24 20:18:50.579450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.006 [2024-07-24 20:18:50.579476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.006 [2024-07-24 20:18:50.579486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.006 [2024-07-24 20:18:50.579495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
00:24:47.006 [2024-07-24 20:18:50.579520] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:24:47.006 [2024-07-24 20:18:50.579535] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:24:47.006 [2024-07-24 20:18:50.579548] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:24:47.006 [2024-07-24 20:18:50.579576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.006 [2024-07-24 20:18:50.579588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.006 [2024-07-24 20:18:50.579597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.006 [2024-07-24 20:18:50.579613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.006 [2024-07-24 20:18:50.579652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.006 [2024-07-24 20:18:50.579836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.006 [2024-07-24 20:18:50.579853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.006 [2024-07-24 20:18:50.579862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.006 [2024-07-24 20:18:50.579872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
00:24:47.006 [2024-07-24 20:18:50.579888] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:24:47.006 [2024-07-24 20:18:50.579908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:24:47.006 [2024-07-24 20:18:50.579925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.006 [2024-07-24 20:18:50.579935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.006 [2024-07-24 20:18:50.579944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.006 [2024-07-24 20:18:50.579959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.006 [2024-07-24 20:18:50.579989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.006 [2024-07-24 20:18:50.580179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.007 [2024-07-24 20:18:50.580201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.007 [2024-07-24 20:18:50.580210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.580219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
00:24:47.007 [2024-07-24 20:18:50.580231] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:24:47.007 [2024-07-24 20:18:50.580252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:24:47.007 [2024-07-24 20:18:50.580269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.580290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.580299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.580313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.007 [2024-07-24 20:18:50.580343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.007 [2024-07-24 20:18:50.580563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.007 [2024-07-24 20:18:50.580584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.007 [2024-07-24 20:18:50.580594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.580603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
00:24:47.007 [2024-07-24 20:18:50.580614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:24:47.007 [2024-07-24 20:18:50.580637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.580649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.580658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.580672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.007 [2024-07-24 20:18:50.580702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.007 [2024-07-24 20:18:50.580868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.007 [2024-07-24 20:18:50.580894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.007 [2024-07-24 20:18:50.580904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.580914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
00:24:47.007 [2024-07-24 20:18:50.580924] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:24:47.007 [2024-07-24 20:18:50.580935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:24:47.007 [2024-07-24 20:18:50.580954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:24:47.007 [2024-07-24 20:18:50.581067] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:24:47.007 [2024-07-24 20:18:50.581077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:24:47.007 [2024-07-24 20:18:50.581095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.581105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.581114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.581128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.007 [2024-07-24 20:18:50.581158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.007 [2024-07-24 20:18:50.581377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.007 [2024-07-24 20:18:50.581397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.007 [2024-07-24 20:18:50.581407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.581416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
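The "check en" / "CC.EN = 0 && CSTS.RDY = 0" / "Setting CC.EN = 1" transitions above are the generic NVMe controller-enable handshake; over fabrics, each register access becomes one of the FABRIC PROPERTY GET/SET capsules interleaved in the trace. In pseudo-C against the NVMe spec, with prop_get()/prop_set() as hypothetical stand-ins for the transport's property operations (they are not SPDK public API):

    #include <stdint.h>

    /* Hypothetical helpers standing in for the FABRIC PROPERTY GET/SET
     * capsules shown in the log. */
    extern uint32_t prop_get(uint32_t offset);
    extern void prop_set(uint32_t offset, uint32_t value);

    #define NVME_REG_CC   0x14   /* Controller Configuration */
    #define NVME_REG_CSTS 0x1c   /* Controller Status */

    /* Hedged sketch of the enable handshake the trace walks through. */
    static void
    enable_controller(void)
    {
    	uint32_t cc = prop_get(NVME_REG_CC);

    	if (cc & 0x1) {                      /* CC.EN already 1? disable first */
    		prop_set(NVME_REG_CC, cc & ~0x1u);
    	}
    	while (prop_get(NVME_REG_CSTS) & 0x1) {
    		/* "disable and wait for CSTS.RDY = 0" state above */
    	}
    	prop_set(NVME_REG_CC, cc | 0x1);     /* "Setting CC.EN = 1" */
    	while (!(prop_get(NVME_REG_CSTS) & 0x1)) {
    		/* wait for CSTS.RDY = 1 -> "controller is ready" */
    	}
    }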
00:24:47.007 [2024-07-24 20:18:50.581436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:24:47.007 [2024-07-24 20:18:50.581461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.581473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.581482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.581497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.007 [2024-07-24 20:18:50.581527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.007 [2024-07-24 20:18:50.581713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.007 [2024-07-24 20:18:50.581733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.007 [2024-07-24 20:18:50.581743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.581752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
00:24:47.007 [2024-07-24 20:18:50.581761] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:24:47.007 [2024-07-24 20:18:50.581773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:24:47.007 [2024-07-24 20:18:50.581791] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:24:47.007 [2024-07-24 20:18:50.581810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:24:47.007 [2024-07-24 20:18:50.581830] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.581845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.581860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.007 [2024-07-24 20:18:50.581890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.007 [2024-07-24 20:18:50.582163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:47.007 [2024-07-24 20:18:50.582180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:47.007 [2024-07-24 20:18:50.582189] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.582198] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c84540): datao=0, datal=4096, cccid=0
00:24:47.007 [2024-07-24 20:18:50.582209] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce43c0) on tqpair(0x1c84540): expected_datao=0, payload_size=4096
00:24:47.007 [2024-07-24 20:18:50.582220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.582244] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.582257] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.626449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.007 [2024-07-24 20:18:50.626473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.007 [2024-07-24 20:18:50.626483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.626493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
00:24:47.007 [2024-07-24 20:18:50.626508] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:24:47.007 [2024-07-24 20:18:50.626520] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:24:47.007 [2024-07-24 20:18:50.626531] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:24:47.007 [2024-07-24 20:18:50.626541] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:24:47.007 [2024-07-24 20:18:50.626551] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:24:47.007 [2024-07-24 20:18:50.626563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:24:47.007 [2024-07-24 20:18:50.626583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:24:47.007 [2024-07-24 20:18:50.626606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.626619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.626628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.626644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:47.007 [2024-07-24 20:18:50.626677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.007 [2024-07-24 20:18:50.626897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.007 [2024-07-24 20:18:50.626918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.007 [2024-07-24 20:18:50.626928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.626937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540
00:24:47.007 [2024-07-24 20:18:50.626952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.626962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.626971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.626985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.007 [2024-07-24 20:18:50.627005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.627016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.627025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.627037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.007 [2024-07-24 20:18:50.627052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.627061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.627070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.627082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.007 [2024-07-24 20:18:50.627095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.627105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.007 [2024-07-24 20:18:50.627113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540)
00:24:47.007 [2024-07-24 20:18:50.627125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.007 [2024-07-24 20:18:50.627138] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:24:47.008 [2024-07-24 20:18:50.627165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:24:47.008 [2024-07-24 20:18:50.627184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.008 [2024-07-24 20:18:50.627194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c84540)
00:24:47.008 [2024-07-24 20:18:50.627208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.008 [2024-07-24 20:18:50.627241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce43c0, cid 0, qid 0
00:24:47.008 [2024-07-24 20:18:50.627256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4540, cid 1, qid 0
00:24:47.008 [2024-07-24 20:18:50.627267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce46c0, cid 2, qid 0
00:24:47.008 [2024-07-24 20:18:50.627278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0
00:24:47.008 [2024-07-24 20:18:50.627288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce49c0, cid 4, qid 0
00:24:47.008 [2024-07-24 20:18:50.627563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.008 [2024-07-24 20:18:50.627582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.008 [2024-07-24 20:18:50.627592] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.008 [2024-07-24 20:18:50.627601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce49c0) on tqpair=0x1c84540
00:24:47.008 [2024-07-24 20:18:50.627612] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:24:47.008 [2024-07-24 20:18:50.627624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to
identify controller iocs specific (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.627649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.627667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.627682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.627692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.627705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c84540) 00:24:47.008 [2024-07-24 20:18:50.627721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.008 [2024-07-24 20:18:50.627753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce49c0, cid 4, qid 0 00:24:47.008 [2024-07-24 20:18:50.627985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.008 [2024-07-24 20:18:50.628006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.008 [2024-07-24 20:18:50.628016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.628025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce49c0) on tqpair=0x1c84540 00:24:47.008 [2024-07-24 20:18:50.628119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.628148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.628168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.628179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c84540) 00:24:47.008 [2024-07-24 20:18:50.628194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.008 [2024-07-24 20:18:50.628224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce49c0, cid 4, qid 0 00:24:47.008 [2024-07-24 20:18:50.628475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.008 [2024-07-24 20:18:50.628496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.008 [2024-07-24 20:18:50.628505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.628514] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c84540): datao=0, datal=4096, cccid=4 00:24:47.008 [2024-07-24 20:18:50.628525] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce49c0) on tqpair(0x1c84540): expected_datao=0, payload_size=4096 00:24:47.008 [2024-07-24 20:18:50.628536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.628560] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.628572] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.672443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:24:47.008 [2024-07-24 20:18:50.672468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.008 [2024-07-24 20:18:50.672478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.672488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce49c0) on tqpair=0x1c84540 00:24:47.008 [2024-07-24 20:18:50.672511] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:47.008 [2024-07-24 20:18:50.672542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.672569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.672588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.672599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c84540) 00:24:47.008 [2024-07-24 20:18:50.672615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.008 [2024-07-24 20:18:50.672647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce49c0, cid 4, qid 0 00:24:47.008 [2024-07-24 20:18:50.672892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.008 [2024-07-24 20:18:50.672919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.008 [2024-07-24 20:18:50.672930] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.672939] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c84540): datao=0, datal=4096, cccid=4 00:24:47.008 [2024-07-24 20:18:50.672949] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce49c0) on tqpair(0x1c84540): expected_datao=0, payload_size=4096 00:24:47.008 [2024-07-24 20:18:50.672959] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.672984] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.672996] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.713591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.008 [2024-07-24 20:18:50.713616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.008 [2024-07-24 20:18:50.713627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.713636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce49c0) on tqpair=0x1c84540 00:24:47.008 [2024-07-24 20:18:50.713670] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.713698] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.713718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.713729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c84540) 00:24:47.008 [2024-07-24 20:18:50.713745] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.008 [2024-07-24 20:18:50.713778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce49c0, cid 4, qid 0 00:24:47.008 [2024-07-24 20:18:50.713931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.008 [2024-07-24 20:18:50.713952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.008 [2024-07-24 20:18:50.713961] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.713970] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c84540): datao=0, datal=4096, cccid=4 00:24:47.008 [2024-07-24 20:18:50.713980] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce49c0) on tqpair(0x1c84540): expected_datao=0, payload_size=4096 00:24:47.008 [2024-07-24 20:18:50.713990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.714015] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.714027] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.754620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.008 [2024-07-24 20:18:50.754645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.008 [2024-07-24 20:18:50.754655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.008 [2024-07-24 20:18:50.754664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce49c0) on tqpair=0x1c84540 00:24:47.008 [2024-07-24 20:18:50.754684] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.754706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.754728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.754747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.754760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.754777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.754790] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:47.008 [2024-07-24 20:18:50.754801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:47.008 [2024-07-24 20:18:50.754813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:47.008 [2024-07-24 20:18:50.754839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.754851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.754866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.009 [2024-07-24 20:18:50.754882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.754892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.754900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.754913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.009 [2024-07-24 20:18:50.754950] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce49c0, cid 4, qid 0 00:24:47.009 [2024-07-24 20:18:50.754966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4b40, cid 5, qid 0 00:24:47.009 [2024-07-24 20:18:50.755160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.009 [2024-07-24 20:18:50.755176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.009 [2024-07-24 20:18:50.755185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.755195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce49c0) on tqpair=0x1c84540 00:24:47.009 [2024-07-24 20:18:50.755209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.009 [2024-07-24 20:18:50.755221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.009 [2024-07-24 20:18:50.755230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.755239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4b40) on tqpair=0x1c84540 00:24:47.009 [2024-07-24 20:18:50.755261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.755273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.755287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.009 [2024-07-24 20:18:50.755317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4b40, cid 5, qid 0 00:24:47.009 [2024-07-24 20:18:50.755546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.009 [2024-07-24 20:18:50.755568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.009 [2024-07-24 20:18:50.755577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.755586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4b40) on tqpair=0x1c84540 00:24:47.009 [2024-07-24 20:18:50.755608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.755619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.755634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.009 [2024-07-24 20:18:50.755663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4b40, cid 5, qid 0 00:24:47.009 [2024-07-24 20:18:50.755835] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.009 [2024-07-24 20:18:50.755852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.009 [2024-07-24 20:18:50.755861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.755870] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4b40) on tqpair=0x1c84540 00:24:47.009 [2024-07-24 20:18:50.755892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.755903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.755918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.009 [2024-07-24 20:18:50.755947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4b40, cid 5, qid 0 00:24:47.009 [2024-07-24 20:18:50.756172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.009 [2024-07-24 20:18:50.756193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.009 [2024-07-24 20:18:50.756202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.756211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4b40) on tqpair=0x1c84540 00:24:47.009 [2024-07-24 20:18:50.756253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.756268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.756283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.009 [2024-07-24 20:18:50.756300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.756311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.756324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.009 [2024-07-24 20:18:50.756340] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.756350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.756363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.009 [2024-07-24 20:18:50.756379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.756390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c84540) 00:24:47.009 [2024-07-24 20:18:50.756403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.009 [2024-07-24 20:18:50.760438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4b40, cid 5, qid 0 00:24:47.009 [2024-07-24 20:18:50.760459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce49c0, cid 4, qid 0 
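The trace up to this point is the host-side admin bring-up that SPDK's NVMe driver runs over TCP: wait for CSTS.RDY = 1, IDENTIFY controller (cdw10:00000001 = CNS 01h), arm four Asynchronous Event Requests, program the keep-alive timer, negotiate queue counts with SET FEATURES NUMBER OF QUEUES (FID 07h), walk the namespaces with IDENTIFY CNS 02h/00h/03h, then fan out GET LOG PAGE for the error, SMART, firmware-slot, and command-effects logs. A minimal sketch of the same sequence from a Linux kernel host with nvme-cli, assuming the target from this run is still listening on 10.0.0.2:4420 (the /dev/nvme0 name is illustrative):

$ nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # kernel host runs the equivalent init state machine
$ nvme id-ctrl /dev/nvme0    # IDENTIFY controller, CNS 01h, as in the cdw10:00000001 capsule above
$ nvme list-ns /dev/nvme0    # IDENTIFY active namespace list, CNS 02h
$ nvme disconnect -n nqn.2016-06.io.spdk:cnode1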
00:24:47.009 [2024-07-24 20:18:50.760470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4cc0, cid 6, qid 0 00:24:47.009 [2024-07-24 20:18:50.760480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e40, cid 7, qid 0 00:24:47.009 [2024-07-24 20:18:50.760503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.009 [2024-07-24 20:18:50.760518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.009 [2024-07-24 20:18:50.760527] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760536] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c84540): datao=0, datal=8192, cccid=5 00:24:47.009 [2024-07-24 20:18:50.760547] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce4b40) on tqpair(0x1c84540): expected_datao=0, payload_size=8192 00:24:47.009 [2024-07-24 20:18:50.760557] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760576] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760588] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.009 [2024-07-24 20:18:50.760612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.009 [2024-07-24 20:18:50.760621] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760629] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c84540): datao=0, datal=512, cccid=4 00:24:47.009 [2024-07-24 20:18:50.760640] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce49c0) on tqpair(0x1c84540): expected_datao=0, payload_size=512 00:24:47.009 [2024-07-24 20:18:50.760649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760662] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760671] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.009 [2024-07-24 20:18:50.760694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.009 [2024-07-24 20:18:50.760703] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760711] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c84540): datao=0, datal=512, cccid=6 00:24:47.009 [2024-07-24 20:18:50.760721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce4cc0) on tqpair(0x1c84540): expected_datao=0, payload_size=512 00:24:47.009 [2024-07-24 20:18:50.760731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760744] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760753] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.009 [2024-07-24 20:18:50.760776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.009 [2024-07-24 20:18:50.760785] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760793] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c84540): datao=0, datal=4096, cccid=7 00:24:47.009 [2024-07-24 20:18:50.760804] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ce4e40) on tqpair(0x1c84540): expected_datao=0, payload_size=4096 00:24:47.009 [2024-07-24 20:18:50.760814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760826] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760836] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.009 [2024-07-24 20:18:50.760859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.009 [2024-07-24 20:18:50.760868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.009 [2024-07-24 20:18:50.760877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4b40) on tqpair=0x1c84540 00:24:47.009 [2024-07-24 20:18:50.760903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.009 [2024-07-24 20:18:50.760918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.009 [2024-07-24 20:18:50.760927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.010 [2024-07-24 20:18:50.760936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce49c0) on tqpair=0x1c84540 00:24:47.010 [2024-07-24 20:18:50.760957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.010 [2024-07-24 20:18:50.760971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.010 [2024-07-24 20:18:50.760980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.010 [2024-07-24 20:18:50.760988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4cc0) on tqpair=0x1c84540 00:24:47.010 [2024-07-24 20:18:50.761003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.010 [2024-07-24 20:18:50.761019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.010 [2024-07-24 20:18:50.761029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.010 [2024-07-24 20:18:50.761038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4e40) on tqpair=0x1c84540 00:24:47.010 ===================================================== 00:24:47.010 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.010 ===================================================== 00:24:47.010 Controller Capabilities/Features 00:24:47.010 ================================ 00:24:47.010 Vendor ID: 8086 00:24:47.010 Subsystem Vendor ID: 8086 00:24:47.010 Serial Number: SPDK00000000000001 00:24:47.010 Model Number: SPDK bdev Controller 00:24:47.010 Firmware Version: 24.09 00:24:47.010 Recommended Arb Burst: 6 00:24:47.010 IEEE OUI Identifier: e4 d2 5c 00:24:47.010 Multi-path I/O 00:24:47.010 May have multiple subsystem ports: Yes 00:24:47.010 May have multiple controllers: Yes 00:24:47.010 Associated with SR-IOV VF: No 00:24:47.010 Max Data Transfer Size: 131072 00:24:47.010 Max Number of Namespaces: 32 00:24:47.010 Max Number of I/O Queues: 127 00:24:47.010 NVMe Specification Version (VS): 1.3 00:24:47.010 NVMe Specification Version (Identify): 1.3 00:24:47.010 Maximum Queue Entries: 128 00:24:47.010 Contiguous Queues Required: Yes 00:24:47.010 
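Everything from the "=====" banner onward is the identify example app pretty-printing the IDENTIFY controller data and the GET FEATURES / GET LOG PAGE results it just fetched over the admin queue traced above. A sketch of invoking it by hand against the same subsystem, assuming an SPDK build tree; the exact option spelling can vary between SPDK versions:

$ ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'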
Arbitration Mechanisms Supported 00:24:47.010 Weighted Round Robin: Not Supported 00:24:47.010 Vendor Specific: Not Supported 00:24:47.010 Reset Timeout: 15000 ms 00:24:47.010 Doorbell Stride: 4 bytes 00:24:47.010 NVM Subsystem Reset: Not Supported 00:24:47.010 Command Sets Supported 00:24:47.010 NVM Command Set: Supported 00:24:47.010 Boot Partition: Not Supported 00:24:47.010 Memory Page Size Minimum: 4096 bytes 00:24:47.010 Memory Page Size Maximum: 4096 bytes 00:24:47.010 Persistent Memory Region: Not Supported 00:24:47.010 Optional Asynchronous Events Supported 00:24:47.010 Namespace Attribute Notices: Supported 00:24:47.010 Firmware Activation Notices: Not Supported 00:24:47.010 ANA Change Notices: Not Supported 00:24:47.010 PLE Aggregate Log Change Notices: Not Supported 00:24:47.010 LBA Status Info Alert Notices: Not Supported 00:24:47.010 EGE Aggregate Log Change Notices: Not Supported 00:24:47.010 Normal NVM Subsystem Shutdown event: Not Supported 00:24:47.010 Zone Descriptor Change Notices: Not Supported 00:24:47.010 Discovery Log Change Notices: Not Supported 00:24:47.010 Controller Attributes 00:24:47.010 128-bit Host Identifier: Supported 00:24:47.010 Non-Operational Permissive Mode: Not Supported 00:24:47.010 NVM Sets: Not Supported 00:24:47.010 Read Recovery Levels: Not Supported 00:24:47.010 Endurance Groups: Not Supported 00:24:47.010 Predictable Latency Mode: Not Supported 00:24:47.010 Traffic Based Keep ALive: Not Supported 00:24:47.010 Namespace Granularity: Not Supported 00:24:47.010 SQ Associations: Not Supported 00:24:47.010 UUID List: Not Supported 00:24:47.010 Multi-Domain Subsystem: Not Supported 00:24:47.010 Fixed Capacity Management: Not Supported 00:24:47.010 Variable Capacity Management: Not Supported 00:24:47.010 Delete Endurance Group: Not Supported 00:24:47.010 Delete NVM Set: Not Supported 00:24:47.010 Extended LBA Formats Supported: Not Supported 00:24:47.010 Flexible Data Placement Supported: Not Supported 00:24:47.010 00:24:47.010 Controller Memory Buffer Support 00:24:47.010 ================================ 00:24:47.010 Supported: No 00:24:47.010 00:24:47.010 Persistent Memory Region Support 00:24:47.010 ================================ 00:24:47.010 Supported: No 00:24:47.010 00:24:47.010 Admin Command Set Attributes 00:24:47.010 ============================ 00:24:47.010 Security Send/Receive: Not Supported 00:24:47.010 Format NVM: Not Supported 00:24:47.010 Firmware Activate/Download: Not Supported 00:24:47.010 Namespace Management: Not Supported 00:24:47.010 Device Self-Test: Not Supported 00:24:47.010 Directives: Not Supported 00:24:47.010 NVMe-MI: Not Supported 00:24:47.010 Virtualization Management: Not Supported 00:24:47.010 Doorbell Buffer Config: Not Supported 00:24:47.010 Get LBA Status Capability: Not Supported 00:24:47.010 Command & Feature Lockdown Capability: Not Supported 00:24:47.010 Abort Command Limit: 4 00:24:47.010 Async Event Request Limit: 4 00:24:47.010 Number of Firmware Slots: N/A 00:24:47.010 Firmware Slot 1 Read-Only: N/A 00:24:47.010 Firmware Activation Without Reset: N/A 00:24:47.010 Multiple Update Detection Support: N/A 00:24:47.010 Firmware Update Granularity: No Information Provided 00:24:47.010 Per-Namespace SMART Log: No 00:24:47.010 Asymmetric Namespace Access Log Page: Not Supported 00:24:47.010 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:47.010 Command Effects Log Page: Supported 00:24:47.010 Get Log Page Extended Data: Supported 00:24:47.010 Telemetry Log Pages: Not Supported 00:24:47.010 Persistent Event Log 
Pages: Not Supported 00:24:47.010 Supported Log Pages Log Page: May Support 00:24:47.010 Commands Supported & Effects Log Page: Not Supported 00:24:47.010 Feature Identifiers & Effects Log Page:May Support 00:24:47.010 NVMe-MI Commands & Effects Log Page: May Support 00:24:47.010 Data Area 4 for Telemetry Log: Not Supported 00:24:47.010 Error Log Page Entries Supported: 128 00:24:47.010 Keep Alive: Supported 00:24:47.010 Keep Alive Granularity: 10000 ms 00:24:47.010 00:24:47.010 NVM Command Set Attributes 00:24:47.010 ========================== 00:24:47.010 Submission Queue Entry Size 00:24:47.010 Max: 64 00:24:47.010 Min: 64 00:24:47.010 Completion Queue Entry Size 00:24:47.010 Max: 16 00:24:47.010 Min: 16 00:24:47.010 Number of Namespaces: 32 00:24:47.010 Compare Command: Supported 00:24:47.010 Write Uncorrectable Command: Not Supported 00:24:47.010 Dataset Management Command: Supported 00:24:47.010 Write Zeroes Command: Supported 00:24:47.010 Set Features Save Field: Not Supported 00:24:47.010 Reservations: Supported 00:24:47.010 Timestamp: Not Supported 00:24:47.010 Copy: Supported 00:24:47.010 Volatile Write Cache: Present 00:24:47.010 Atomic Write Unit (Normal): 1 00:24:47.010 Atomic Write Unit (PFail): 1 00:24:47.010 Atomic Compare & Write Unit: 1 00:24:47.010 Fused Compare & Write: Supported 00:24:47.010 Scatter-Gather List 00:24:47.010 SGL Command Set: Supported 00:24:47.010 SGL Keyed: Supported 00:24:47.010 SGL Bit Bucket Descriptor: Not Supported 00:24:47.010 SGL Metadata Pointer: Not Supported 00:24:47.010 Oversized SGL: Not Supported 00:24:47.010 SGL Metadata Address: Not Supported 00:24:47.010 SGL Offset: Supported 00:24:47.010 Transport SGL Data Block: Not Supported 00:24:47.010 Replay Protected Memory Block: Not Supported 00:24:47.010 00:24:47.010 Firmware Slot Information 00:24:47.010 ========================= 00:24:47.010 Active slot: 1 00:24:47.010 Slot 1 Firmware Revision: 24.09 00:24:47.010 00:24:47.010 00:24:47.010 Commands Supported and Effects 00:24:47.010 ============================== 00:24:47.010 Admin Commands 00:24:47.010 -------------- 00:24:47.010 Get Log Page (02h): Supported 00:24:47.010 Identify (06h): Supported 00:24:47.010 Abort (08h): Supported 00:24:47.010 Set Features (09h): Supported 00:24:47.010 Get Features (0Ah): Supported 00:24:47.010 Asynchronous Event Request (0Ch): Supported 00:24:47.010 Keep Alive (18h): Supported 00:24:47.010 I/O Commands 00:24:47.010 ------------ 00:24:47.010 Flush (00h): Supported LBA-Change 00:24:47.010 Write (01h): Supported LBA-Change 00:24:47.010 Read (02h): Supported 00:24:47.010 Compare (05h): Supported 00:24:47.010 Write Zeroes (08h): Supported LBA-Change 00:24:47.010 Dataset Management (09h): Supported LBA-Change 00:24:47.010 Copy (19h): Supported LBA-Change 00:24:47.010 00:24:47.010 Error Log 00:24:47.010 ========= 00:24:47.010 00:24:47.010 Arbitration 00:24:47.010 =========== 00:24:47.010 Arbitration Burst: 1 00:24:47.010 00:24:47.010 Power Management 00:24:47.010 ================ 00:24:47.010 Number of Power States: 1 00:24:47.010 Current Power State: Power State #0 00:24:47.010 Power State #0: 00:24:47.010 Max Power: 0.00 W 00:24:47.010 Non-Operational State: Operational 00:24:47.010 Entry Latency: Not Reported 00:24:47.011 Exit Latency: Not Reported 00:24:47.011 Relative Read Throughput: 0 00:24:47.011 Relative Read Latency: 0 00:24:47.011 Relative Write Throughput: 0 00:24:47.011 Relative Write Latency: 0 00:24:47.011 Idle Power: Not Reported 00:24:47.011 Active Power: Not Reported 00:24:47.011 
Non-Operational Permissive Mode: Not Supported 00:24:47.011 00:24:47.011 Health Information 00:24:47.011 ================== 00:24:47.011 Critical Warnings: 00:24:47.011 Available Spare Space: OK 00:24:47.011 Temperature: OK 00:24:47.011 Device Reliability: OK 00:24:47.011 Read Only: No 00:24:47.011 Volatile Memory Backup: OK 00:24:47.011 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:47.011 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:47.011 Available Spare: 0% 00:24:47.011 Available Spare Threshold: 0% 00:24:47.011 Life Percentage Used:[2024-07-24 20:18:50.761199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.761216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c84540) 00:24:47.011 [2024-07-24 20:18:50.761231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.011 [2024-07-24 20:18:50.761264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4e40, cid 7, qid 0 00:24:47.011 [2024-07-24 20:18:50.761540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.011 [2024-07-24 20:18:50.761559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.011 [2024-07-24 20:18:50.761568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.761577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4e40) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.761637] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:47.011 [2024-07-24 20:18:50.761663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce43c0) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.761677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.011 [2024-07-24 20:18:50.761689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4540) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.761700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.011 [2024-07-24 20:18:50.761711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce46c0) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.761721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.011 [2024-07-24 20:18:50.761732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.761742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.011 [2024-07-24 20:18:50.761760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.761771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.761780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.011 [2024-07-24 20:18:50.761794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.011 [2024-07-24 20:18:50.761825] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.011 [2024-07-24 20:18:50.762039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.011 [2024-07-24 20:18:50.762055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.011 [2024-07-24 20:18:50.762064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.762089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.011 [2024-07-24 20:18:50.762122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.011 [2024-07-24 20:18:50.762166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.011 [2024-07-24 20:18:50.762392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.011 [2024-07-24 20:18:50.762412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.011 [2024-07-24 20:18:50.762422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.762451] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:47.011 [2024-07-24 20:18:50.762461] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:47.011 [2024-07-24 20:18:50.762484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.011 [2024-07-24 20:18:50.762519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.011 [2024-07-24 20:18:50.762548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.011 [2024-07-24 20:18:50.762762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.011 [2024-07-24 20:18:50.762778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.011 [2024-07-24 20:18:50.762787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.762818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762830] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.762839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.011 [2024-07-24 20:18:50.762853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.011 [2024-07-24 20:18:50.762881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.011 [2024-07-24 20:18:50.763050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.011 [2024-07-24 20:18:50.763070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.011 [2024-07-24 20:18:50.763080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.763089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.763111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.763124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.763133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.011 [2024-07-24 20:18:50.763147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.011 [2024-07-24 20:18:50.763175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.011 [2024-07-24 20:18:50.763388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.011 [2024-07-24 20:18:50.763404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.011 [2024-07-24 20:18:50.763413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.011 [2024-07-24 20:18:50.763422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.011 [2024-07-24 20:18:50.763455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.763469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.763478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.012 [2024-07-24 20:18:50.763492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.012 [2024-07-24 20:18:50.763526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.012 [2024-07-24 20:18:50.763695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.012 [2024-07-24 20:18:50.763715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.012 [2024-07-24 20:18:50.763724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.763733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.012 [2024-07-24 20:18:50.763756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.763768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.763777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.012 [2024-07-24 20:18:50.763791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.012 [2024-07-24 20:18:50.763820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.012 [2024-07-24 
20:18:50.763982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.012 [2024-07-24 20:18:50.763998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.012 [2024-07-24 20:18:50.764007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.764016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.012 [2024-07-24 20:18:50.764038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.764050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.764058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.012 [2024-07-24 20:18:50.764073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.012 [2024-07-24 20:18:50.764108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.012 [2024-07-24 20:18:50.764372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.012 [2024-07-24 20:18:50.764387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.012 [2024-07-24 20:18:50.764396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.764405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.012 [2024-07-24 20:18:50.764426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.768454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.768463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c84540) 00:24:47.012 [2024-07-24 20:18:50.768478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.012 [2024-07-24 20:18:50.768510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ce4840, cid 3, qid 0 00:24:47.012 [2024-07-24 20:18:50.768710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.012 [2024-07-24 20:18:50.768730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.012 [2024-07-24 20:18:50.768740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.012 [2024-07-24 20:18:50.768749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ce4840) on tqpair=0x1c84540 00:24:47.012 [2024-07-24 20:18:50.768768] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:47.012 0% 00:24:47.012 Data Units Read: 0 00:24:47.012 Data Units Written: 0 00:24:47.012 Host Read Commands: 0 00:24:47.012 Host Write Commands: 0 00:24:47.012 Controller Busy Time: 0 minutes 00:24:47.012 Power Cycles: 0 00:24:47.012 Power On Hours: 0 hours 00:24:47.012 Unsafe Shutdowns: 0 00:24:47.012 Unrecoverable Media Errors: 0 00:24:47.012 Lifetime Error Log Entries: 0 00:24:47.012 Warning Temperature Time: 0 minutes 00:24:47.012 Critical Temperature Time: 0 minutes 00:24:47.012 00:24:47.012 Number of Queues 00:24:47.012 ================ 00:24:47.012 Number of I/O Submission Queues: 127 00:24:47.012 Number of I/O Completion Queues: 127 00:24:47.012 00:24:47.012 Active Namespaces 00:24:47.012 
================= 00:24:47.012 Namespace ID:1 00:24:47.012 Error Recovery Timeout: Unlimited 00:24:47.012 Command Set Identifier: NVM (00h) 00:24:47.012 Deallocate: Supported 00:24:47.012 Deallocated/Unwritten Error: Not Supported 00:24:47.012 Deallocated Read Value: Unknown 00:24:47.012 Deallocate in Write Zeroes: Not Supported 00:24:47.012 Deallocated Guard Field: 0xFFFF 00:24:47.012 Flush: Supported 00:24:47.012 Reservation: Supported 00:24:47.012 Namespace Sharing Capabilities: Multiple Controllers 00:24:47.012 Size (in LBAs): 131072 (0GiB) 00:24:47.012 Capacity (in LBAs): 131072 (0GiB) 00:24:47.012 Utilization (in LBAs): 131072 (0GiB) 00:24:47.012 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:47.012 EUI64: ABCDEF0123456789 00:24:47.012 UUID: 4b5f46d9-daa5-4440-ac83-9407ed5e15a5 00:24:47.012 Thin Provisioning: Not Supported 00:24:47.012 Per-NS Atomic Units: Yes 00:24:47.012 Atomic Boundary Size (Normal): 0 00:24:47.012 Atomic Boundary Size (PFail): 0 00:24:47.012 Atomic Boundary Offset: 0 00:24:47.012 Maximum Single Source Range Length: 65535 00:24:47.012 Maximum Copy Length: 65535 00:24:47.012 Maximum Source Range Count: 1 00:24:47.012 NGUID/EUI64 Never Reused: No 00:24:47.012 Namespace Write Protected: No 00:24:47.012 Number of LBA Formats: 1 00:24:47.012 Current LBA Format: LBA Format #00 00:24:47.012 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:47.012 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.270 rmmod nvme_tcp 00:24:47.270 rmmod nvme_fabrics 00:24:47.270 rmmod nvme_keyring 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2114422 ']' 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2114422 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2114422 ']' 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2114422 00:24:47.270 20:18:50 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2114422 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2114422' 00:24:47.270 killing process with pid 2114422 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2114422 00:24:47.270 20:18:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2114422 00:24:47.837 20:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:47.837 20:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:47.837 20:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:47.837 20:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.837 20:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.837 20:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.837 20:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.837 20:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.739 20:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.739 00:24:49.739 real 0m7.622s 00:24:49.739 user 0m10.128s 00:24:49.739 sys 0m2.706s 00:24:49.739 20:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:49.739 20:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:49.739 ************************************ 00:24:49.739 END TEST nvmf_identify 00:24:49.739 ************************************ 00:24:49.739 20:18:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:49.739 20:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:49.739 20:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:49.739 20:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.739 ************************************ 00:24:49.739 START TEST nvmf_perf 00:24:49.739 ************************************ 00:24:49.739 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:49.998 * Looking for test storage... 
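The identify test's teardown just above deletes the subsystem over JSON-RPC, unloads the initiator-side kernel modules, and kills the target process before the timing summary is printed. The equivalent manual cleanup, assuming the default rpc.py socket (the target PID is of course specific to each run, so it is left as a placeholder here):

$ scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # same RPC the test issues
$ modprobe -v -r nvme-tcp    # also drags out nvme-fabrics and nvme-keyring, per the rmmod lines above
$ kill -0 <target_pid> && kill <target_pid>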
00:24:49.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
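For orientation: the xtrace above is perf.sh working through its preamble. Condensed into a plain script, the shape is roughly the sketch below; the helper names (nvmftestinit, nvmfappstart, nvmftestfini) and values are the real ones from this trace, while the surrounding scaffolding is an assumption, not captured output.
#!/usr/bin/env bash
# Sketch only -- not part of the captured log.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path as seen in the trace
source "$rootdir/test/nvmf/common.sh"      # defines nvmftestinit, nvmfappstart, nvmftestfini

rpc_py="$rootdir/scripts/rpc.py"
MALLOC_BDEV_SIZE=64        # MiB of RAM backing the Malloc test bdev
MALLOC_BLOCK_SIZE=512      # logical block size for that bdev

nvmftestinit                              # pick NICs, build the netns topology, modprobe nvme-tcp
trap nvmftestfini SIGINT SIGTERM EXIT     # tear everything down on any exit
nvmfappstart -m 0xF                       # start nvmf_tgt on cores 0-3, wait for /var/tmp/spdk.sock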
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:49.998 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:49.999 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:49.999 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:49.999 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:49.999 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable
00:24:49.999 20:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:52.530 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:52.530 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=()
00:24:52.530 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs
00:24:52.530 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:24:52.530 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:24:52.530 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=()
00:24:52.530 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=()
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=()
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=()
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=()
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
Found 0000:84:00.0 (0x8086 - 0x159b)
20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
Found 0000:84:00.1 (0x8086 - 0x159b)
20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:24:52.531 Found net devices under 0000:84:00.0: cvl_0_0
20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:24:52.531 Found net devices under 0000:84:00.1: cvl_0_1
20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:52.531 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:52.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:52.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms
00:24:52.791
00:24:52.791 --- 10.0.0.2 ping statistics ---
00:24:52.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:52.791 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:52.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:52.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
00:24:52.791
00:24:52.791 --- 10.0.0.1 ping statistics ---
00:24:52.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:52.791 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2116765
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2116765
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2116765 ']'
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
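While the target starts, note that the trace above has already rebuilt the test topology. A minimal sketch of those steps, using the interface names and commands exactly as they appear in the log (one e810 port moves into a network namespace as the target NIC; its sibling stays in the root namespace as the initiator):
# Sketch assembled from the commands in the trace above -- not log output.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP port
ping -c 1 10.0.0.2                                     # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and the reverse path
# The target then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF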
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:52.791 20:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:53.049 [2024-07-24 20:18:56.538224] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:24:53.049 [2024-07-24 20:18:56.538394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:53.053 EAL: No free 2048 kB hugepages reported on node 1
00:24:53.053 [2024-07-24 20:18:56.689339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:53.309 [2024-07-24 20:18:56.886695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:53.309 [2024-07-24 20:18:56.886792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:53.309 [2024-07-24 20:18:56.886827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:53.309 [2024-07-24 20:18:56.886857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:53.309 [2024-07-24 20:18:56.886883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:53.309 [2024-07-24 20:18:56.887046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:53.309 [2024-07-24 20:18:56.887105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:53.309 [2024-07-24 20:18:56.887185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:53.309 [2024-07-24 20:18:56.887191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:54.248 20:18:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:54.248 20:18:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0
00:24:54.248 20:18:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:54.248 20:18:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:54.248 20:18:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:54.248 20:18:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:54.248 20:18:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:24:54.248 20:18:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:24:57.527 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:24:57.527 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:24:57.784 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0
00:24:57.784 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:24:58.042 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:24:58.042 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']'
00:24:58.042 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:58.042 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:24:58.042 20:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:24:58.606 [2024-07-24 20:19:02.100403] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:58.607 20:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:58.864 20:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:58.864 20:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:59.121 20:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:59.121 20:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:59.379 20:19:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:59.943 [2024-07-24 20:19:03.472757] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:59.943 20:19:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:00.507 20:19:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']'
00:25:00.507 20:19:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0'
00:25:00.507 20:19:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:25:00.507 20:19:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0'
00:25:01.878 Initializing NVMe Controllers
00:25:01.878 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54]
00:25:01.878 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0
00:25:01.878 Initialization complete. Launching workers.
00:25:01.878 ========================================================
00:25:01.878 Latency(us)
00:25:01.878 Device Information : IOPS MiB/s Average min max
00:25:01.878 PCIE (0000:82:00.0) NSID 1 from core 0: 62160.38 242.81 514.21 60.93 8357.51
00:25:01.878 ========================================================
00:25:01.878 Total : 62160.38 242.81 514.21 60.93 8357.51
00:25:01.878
00:25:01.878 20:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:01.878 EAL: No free 2048 kB hugepages reported on node 1
00:25:03.249 Initializing NVMe Controllers
00:25:03.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:03.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:03.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:03.249 Initialization complete. Launching workers.
00:25:03.249 ========================================================
00:25:03.249 Latency(us)
00:25:03.249 Device Information : IOPS MiB/s Average min max
00:25:03.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.66 0.37 10585.13 224.39 45823.09
00:25:03.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.80 0.22 17744.85 7937.37 47892.34
00:25:03.249 ========================================================
00:25:03.249 Total : 151.46 0.59 13270.02 224.39 47892.34
00:25:03.249
00:25:03.249 20:19:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:03.249 EAL: No free 2048 kB hugepages reported on node 1
00:25:04.182 Initializing NVMe Controllers
00:25:04.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:04.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:04.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:04.182 Initialization complete. Launching workers.
00:25:04.182 ======================================================== 00:25:04.182 Latency(us) 00:25:04.182 Device Information : IOPS MiB/s Average min max 00:25:04.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6266.65 24.48 5107.72 870.71 9815.56 00:25:04.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3786.29 14.79 8479.65 5902.02 17383.20 00:25:04.182 ======================================================== 00:25:04.182 Total : 10052.93 39.27 6377.70 870.71 17383.20 00:25:04.182 00:25:04.182 20:19:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:04.182 20:19:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:04.182 20:19:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:04.183 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.712 Initializing NVMe Controllers 00:25:06.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.712 Controller IO queue size 128, less than required. 00:25:06.712 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:06.712 Controller IO queue size 128, less than required. 00:25:06.712 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:06.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:06.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:06.712 Initialization complete. Launching workers. 00:25:06.712 ======================================================== 00:25:06.712 Latency(us) 00:25:06.712 Device Information : IOPS MiB/s Average min max 00:25:06.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1136.39 284.10 115193.30 79714.10 201744.97 00:25:06.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 556.45 139.11 243120.86 112756.51 395341.23 00:25:06.712 ======================================================== 00:25:06.712 Total : 1692.84 423.21 157243.91 79714.10 395341.23 00:25:06.712 00:25:06.970 20:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:06.970 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.970 No valid NVMe controllers or AIO or URING devices found 00:25:06.970 Initializing NVMe Controllers 00:25:06.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.970 Controller IO queue size 128, less than required. 00:25:06.970 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:06.970 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:06.970 Controller IO queue size 128, less than required. 00:25:06.970 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:06.970 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:06.970 WARNING: Some requested NVMe devices were skipped 00:25:07.228 20:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:07.228 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.761 Initializing NVMe Controllers 00:25:09.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.761 Controller IO queue size 128, less than required. 00:25:09.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.761 Controller IO queue size 128, less than required. 00:25:09.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:09.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:09.761 Initialization complete. Launching workers. 00:25:09.761 00:25:09.761 ==================== 00:25:09.761 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:09.761 TCP transport: 00:25:09.761 polls: 6339 00:25:09.761 idle_polls: 3974 00:25:09.761 sock_completions: 2365 00:25:09.761 nvme_completions: 4389 00:25:09.761 submitted_requests: 6602 00:25:09.761 queued_requests: 1 00:25:09.761 00:25:09.761 ==================== 00:25:09.761 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:09.761 TCP transport: 00:25:09.761 polls: 8719 00:25:09.761 idle_polls: 5925 00:25:09.761 sock_completions: 2794 00:25:09.761 nvme_completions: 4639 00:25:09.761 submitted_requests: 6972 00:25:09.761 queued_requests: 1 00:25:09.761 ======================================================== 00:25:09.761 Latency(us) 00:25:09.761 Device Information : IOPS MiB/s Average min max 00:25:09.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1094.73 273.68 119577.93 58595.23 214567.61 00:25:09.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1157.10 289.28 112819.42 54371.34 167293.71 00:25:09.761 ======================================================== 00:25:09.761 Total : 2251.83 562.96 116105.08 54371.34 214567.61 00:25:09.761 00:25:09.761 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:09.761 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:10.021 rmmod nvme_tcp 00:25:10.021 rmmod nvme_fabrics 00:25:10.021 rmmod nvme_keyring 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2116765 ']' 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2116765 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2116765 ']' 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2116765 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2116765 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2116765' 00:25:10.021 killing process with pid 2116765 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2116765 00:25:10.021 20:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2116765 00:25:11.929 20:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:11.929 20:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:11.929 20:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:11.929 20:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.929 20:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:11.929 20:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.929 20:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.929 20:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.846 20:19:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.846 00:25:13.846 real 0m24.122s 00:25:13.846 user 1m15.070s 00:25:13.846 sys 0m6.163s 00:25:13.846 20:19:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:13.846 20:19:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 ************************************ 00:25:13.846 END TEST nvmf_perf 00:25:13.846 ************************************ 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:14.107 ************************************ 00:25:14.107 START TEST nvmf_fio_host 00:25:14.107 ************************************ 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:14.107 * Looking for test storage... 00:25:14.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.107 20:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:16.643 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:16.643 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:16.643 20:19:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:16.643 Found net devices under 0000:84:00.0: cvl_0_0 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:16.643 Found net devices under 0000:84:00.1: cvl_0_1 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:16.643 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:16.644 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:16.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:16.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms
00:25:16.903
00:25:16.903 --- 10.0.0.2 ping statistics ---
00:25:16.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:16.903 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:16.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:16.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms
00:25:16.903
00:25:16.903 --- 10.0.0.1 ping statistics ---
00:25:16.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:16.903 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2121001
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2121001
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2121001 ']'
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:16.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:16.903 20:19:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.903 [2024-07-24 20:19:20.600080] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:25:16.903 [2024-07-24 20:19:20.600177] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:16.903 EAL: No free 2048 kB hugepages reported on node 1
00:25:17.163 [2024-07-24 20:19:20.707136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:17.163 [2024-07-24 20:19:20.915523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:17.163 [2024-07-24 20:19:20.915596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:17.163 [2024-07-24 20:19:20.915616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:17.163 [2024-07-24 20:19:20.915634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:17.163 [2024-07-24 20:19:20.915648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:17.163 [2024-07-24 20:19:20.915720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:17.163 [2024-07-24 20:19:20.915783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:17.163 [2024-07-24 20:19:20.915864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:17.163 [2024-07-24 20:19:20.915871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:18.136 20:19:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:18.136 20:19:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0
00:25:18.136 20:19:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:18.394 [2024-07-24 20:19:21.995002] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:18.394 20:19:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:25:18.394 20:19:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:18.394 20:19:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.394 20:19:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:25:18.652 Malloc1
00:25:18.652 20:19:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:18.910 20:19:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:25:19.168 20:19:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:19.737 [2024-07-24 20:19:23.277212] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:19.737 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:25:20.303 20:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:25:20.571 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:25:20.571 fio-3.35
00:25:20.571 Starting 1 thread
00:25:20.571 EAL: No free 2048 kB hugepages reported on node 1
00:25:23.098
00:25:23.098 test: (groupid=0, jobs=1): err= 0: pid=2121496: Wed Jul 24 20:19:26 2024
00:25:23.098 read: IOPS=6737, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec)
00:25:23.098 slat (usec): min=2, max=123, avg= 3.25, stdev= 1.53
00:25:23.098 clat (usec): min=3136, max=16839, avg=10410.98, stdev=865.59
00:25:23.098 lat (usec): min=3160, max=16842, avg=10414.23, stdev=865.48
00:25:23.098 clat percentiles (usec):
00:25:23.098 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765],
00:25:23.098 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683],
00:25:23.098 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731],
00:25:23.098 | 99.00th=[12256], 99.50th=[12387], 99.90th=[16057], 99.95th=[16319],
00:25:23.098 | 99.99th=[16712]
00:25:23.098 bw ( KiB/s): min=26064, max=27400, per=99.90%, avg=26924.00, stdev=603.79, samples=4
00:25:23.098 iops : min= 6516, max= 6850, avg=6731.00, stdev=150.95, samples=4
00:25:23.098 write: IOPS=6741, BW=26.3MiB/s (27.6MB/s)(52.9MiB/2008msec); 0 zone resets
00:25:23.098 slat (usec): min=2, max=118, avg= 3.41, stdev= 1.10
00:25:23.098 clat (usec): min=1168, max=16254, avg=8521.36, stdev=744.54
00:25:23.098 lat (usec): min=1175, max=16258, avg=8524.77, stdev=744.49
00:25:23.098 clat percentiles (usec):
00:25:23.098 | 1.00th=[ 6849], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 7963],
00:25:23.098 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8717],
00:25:23.098 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9503],
00:25:23.098 | 99.00th=[10028], 99.50th=[10290], 99.90th=[15008], 99.95th=[15270],
00:25:23.098 | 99.99th=[16188]
00:25:23.098 bw ( KiB/s): min=26688, max=27160, per=99.95%, avg=26952.00, stdev=203.12, samples=4
00:25:23.098 iops : min= 6672, max= 6790, avg=6738.00, stdev=50.78, samples=4
00:25:23.098 lat (msec) : 2=0.01%, 4=0.08%, 10=64.54%, 20=35.37%
00:25:23.098 cpu : usr=66.02%, sys=31.64%, ctx=66, majf=0, minf=39
00:25:23.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:25:23.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:23.098 issued rwts: total=13529,13536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.098 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:23.098
00:25:23.098 Run status group 0 (all jobs):
00:25:23.098 READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.4MB), run=2008-2008msec
00:25:23.098 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.9MiB (55.4MB), run=2008-2008msec
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:25:23.098 20:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:23.098 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:25:23.098 fio-3.35
00:25:23.098 Starting 1 thread
00:25:23.098 EAL: No free 2048 kB hugepages reported on node 1
00:25:25.628
00:25:25.628 test: (groupid=0, jobs=1): err= 0: pid=2121825: Wed Jul 24 20:19:29 2024
00:25:25.628 read: IOPS=6549, BW=102MiB/s (107MB/s)(205MiB/2007msec)
00:25:25.628 slat (usec): min=3, max=116, avg= 4.46, stdev= 1.52
00:25:25.628 clat (usec): min=3733, max=22094, avg=11324.55, stdev=2362.17
00:25:25.628 lat (usec): min=3738, max=22099, avg=11329.02, stdev=2362.15
00:25:25.628 clat percentiles (usec):
00:25:25.628 | 1.00th=[ 5866], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 9503],
00:25:25.628 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600],
00:25:25.628 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14353], 95.00th=[15533],
00:25:25.628 | 99.00th=[17957], 99.50th=[18482], 99.90th=[20317], 99.95th=[20579],
00:25:25.628 | 99.99th=[20579]
00:25:25.628 bw ( KiB/s): min=44160, max=59584, per=49.43%, avg=51800.00, stdev=7565.88, samples=4
00:25:25.628 iops : min= 2760, max= 3726, avg=3238.00, stdev=473.55, samples=4
00:25:25.628 write: IOPS=3753, BW=58.6MiB/s (61.5MB/s)(107MiB/1818msec); 0 zone resets
00:25:25.628 slat (usec): min=39, max=157, avg=41.04, stdev= 3.21
00:25:25.628 clat (usec): min=6058, max=26112, avg=14879.16, stdev=2558.58
00:25:25.628 lat (usec): min=6098, max=26152, avg=14920.20, stdev=2558.41
00:25:25.628 clat percentiles (usec):
00:25:25.628 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[11731], 20.00th=[12649],
00:25:25.628 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14746], 60.00th=[15401],
00:25:25.628 | 70.00th=[16057], 80.00th=[17171], 90.00th=[18220], 95.00th=[19006],
00:25:25.628 | 99.00th=[21103], 99.50th=[23200], 99.90th=[25560], 99.95th=[25822],
00:25:25.628 | 99.99th=[26084]
00:25:25.628 bw ( KiB/s): min=46336, max=62080, per=90.26%, avg=54208.00, stdev=7667.37, samples=4
00:25:25.628 iops : min= 2896, max= 3880, avg=3388.00, stdev=479.21, samples=4
00:25:25.628 lat (msec) : 4=0.03%, 10=18.69%, 20=80.19%, 50=1.09%
00:25:25.628 cpu : usr=79.36%, sys=18.44%, ctx=55, majf=0, minf=57
00:25:25.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:25.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:25.628 issued rwts: total=13145,6824,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:25.628 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:25.628
00:25:25.628 Run status group 0 (all jobs):
00:25:25.628 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=205MiB (215MB), run=2007-2007msec
00:25:25.628 WRITE: bw=58.6MiB/s (61.5MB/s), 58.6MiB/s-58.6MiB/s (61.5MB/s-61.5MB/s), io=107MiB (112MB), run=1818-1818msec
00:25:25.628 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:25.886 rmmod nvme_tcp
00:25:25.886 rmmod nvme_fabrics
00:25:25.886 rmmod nvme_keyring
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2121001 ']'
00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- #
killprocess 2121001 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2121001 ']' 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2121001 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2121001 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2121001' 00:25:25.886 killing process with pid 2121001 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2121001 00:25:25.886 20:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2121001 00:25:26.451 20:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:26.451 20:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:26.451 20:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:26.451 20:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.451 20:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:26.452 20:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.452 20:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.452 20:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.353 20:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.354 00:25:28.354 real 0m14.387s 00:25:28.354 user 0m43.085s 00:25:28.354 sys 0m4.568s 00:25:28.354 20:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:28.354 20:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.354 ************************************ 00:25:28.354 END TEST nvmf_fio_host 00:25:28.354 ************************************ 00:25:28.354 20:19:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:28.354 20:19:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:28.354 20:19:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:28.354 20:19:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.354 ************************************ 00:25:28.354 START TEST nvmf_failover 00:25:28.354 ************************************ 00:25:28.354 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:28.613 * Looking for test storage... 
00:25:28.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:28.613 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
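For orientation: on this phy TCP run, nvmftestinit discovers the two ice ports as cvl_0_0 and cvl_0_1, moves one of them into a private network namespace to act as the SPDK target side (10.0.0.2), and leaves the other in the root namespace as the initiator side (10.0.0.1). A condensed, hand-written sketch of what the nvmf/common.sh trace below actually runs (interface names and addresses are the ones in this log; TARGET_NS is shorthand for the script's NVMF_TARGET_NAMESPACE; address flushes and error handling omitted):

  TARGET_NS=cvl_0_0_ns_spdk                        # namespace that owns the target port
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"           # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # keep local firewall rules from blocking NVMe/TCP
  ping -c 1 10.0.0.2                               # root namespace -> target sanity check
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target namespace -> initiator sanity check

nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix), so it listens on 10.0.0.2 inside the namespace while the initiator-side processes connect from the root namespace.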
00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.614 20:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.901 20:19:35 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:31.901 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:31.901 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:31.901 Found net devices under 0000:84:00.0: cvl_0_0 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.901 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:31.902 Found net devices under 0000:84:00.1: cvl_0_1 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:31.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:25:31.902 00:25:31.902 --- 10.0.0.2 ping statistics --- 00:25:31.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.902 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:31.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:25:31.902 00:25:31.902 --- 10.0.0.1 ping statistics --- 00:25:31.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.902 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2124169 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:31.902 20:19:35 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2124169 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2124169 ']' 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:31.902 20:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.902 [2024-07-24 20:19:35.292036] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:25:31.902 [2024-07-24 20:19:35.292138] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.902 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.902 [2024-07-24 20:19:35.382772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:31.902 [2024-07-24 20:19:35.520566] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.902 [2024-07-24 20:19:35.520635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.902 [2024-07-24 20:19:35.520655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.902 [2024-07-24 20:19:35.520673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.902 [2024-07-24 20:19:35.520697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
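Once the reactors report in below, host/failover.sh provisions the target entirely over RPC and then drives I/O from a separate bdevperf process. A condensed sketch of the sequence visible in the rest of this trace (rpc.py stands for the full scripts/rpc.py path; all arguments are the ones logged here):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator side: bdevperf idles (-z) until told to run over its own RPC socket
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # start the 15 s verify workload

Attaching the same subsystem twice under the one controller name NVMe0 gives NVMe0n1 more than one path; the test then alternately removes, re-adds, and attaches listeners/paths on ports 4420, 4421, and 4422 while the workload runs, forcing failover between them. The repeated tcp.c nvmf_tcp_qpair_set_recv_state messages further down appear around those listener removals, as the affected connections are torn down.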
00:25:31.902 [2024-07-24 20:19:35.520791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.902 [2024-07-24 20:19:35.520857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.902 [2024-07-24 20:19:35.520861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.837 20:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:32.837 20:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:32.837 20:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:32.837 20:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:32.837 20:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:32.837 20:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.837 20:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:33.095 [2024-07-24 20:19:36.820172] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.095 20:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:34.031 Malloc0 00:25:34.031 20:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.289 20:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.854 20:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.418 [2024-07-24 20:19:39.081330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.418 20:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:35.674 [2024-07-24 20:19:39.454407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:35.932 20:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:36.499 [2024-07-24 20:19:40.024635] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2124781 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2124781 /var/tmp/bdevperf.sock 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2124781 ']' 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:36.499 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:36.757 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.757 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:36.757 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.015 NVMe0n1 00:25:37.273 20:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.531 00:25:37.531 20:19:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2124974 00:25:37.531 20:19:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:37.531 20:19:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:38.906 20:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.906 20:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:42.190 20:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:42.449 00:25:42.708 20:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:42.966 [2024-07-24 20:19:46.726003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf41f0 is same with the state(5) to be set 00:25:42.966 [2024-07-24 20:19:46.726061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf41f0 is same with the state(5) to be set 00:25:42.966 [2024-07-24 20:19:46.726082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf41f0 is same with the state(5) to 
be set
00:25:42.966 [... tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0xdf41f0 repeated 6 more times, last at 2024-07-24 20:19:46.726189 ...]
00:25:42.967 20:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:46.278 20:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:46.278 [2024-07-24 20:19:50.036481] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:46.536 20:19:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:47.473 20:19:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:47.732 [2024-07-24 20:19:51.351662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadf80 is same with the state(5) to be set
00:25:47.732 [... same tcp.c:1653 message for tqpair=0xfadf80 repeated continuously through 2024-07-24 20:19:51.352921 ...]
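The attach/remove-listener sequence traced above is the whole multipath exercise: attaching the same -b NVMe0 controller name on a second (and later a third) portal registers an alternate path with bdev_nvme rather than creating a new bdev, and each nvmf_subsystem_remove_listener then yanks the active portal while bdevperf keeps I/O in flight, forcing a failover. A minimal standalone sketch of the same steps, assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock and a target exposing nqn.2016-06.io.spdk:cnode1 on both portals (invocations mirror the trace; the abbreviated scripts/ path is the only liberty taken):

  # register the primary path and one alternate under the same controller name
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the portal the host is using (target-side RPC socket); in-flight I/O
  # aborts with SQ DELETION and bdev_nvme retries it on the surviving path
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420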
00:25:47.733 20:19:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2124974
00:25:52.998 0
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2124781
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2124781 ']'
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2124781
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2124781
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2124781'
killing process with pid 2124781
00:25:52.998 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2124781
00:25:53.266 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2124781
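The @61 killprocess step follows the teardown shape visible in the xtrace: confirm the pid is non-empty and alive, check what the process actually is before signaling it, then kill and reap. A rough bash equivalent, paraphrased from the trace rather than copied from autotest_common.sh (the sudo branch, which the trace never takes, is stubbed out here):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                        # still running?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
      [ "$process_name" = sudo ] && return 1            # real helper special-cases sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap and propagate exit status
  }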
00:25:53.266 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:53.266 [2024-07-24 20:19:40.100600] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:25:53.266 [2024-07-24 20:19:40.100708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124781 ]
00:25:53.266 EAL: No free 2048 kB hugepages reported on node 1
00:25:53.266 [2024-07-24 20:19:40.177271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:53.266 [2024-07-24 20:19:40.317011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:53.266 Running I/O for 15 seconds...
00:25:53.266 [2024-07-24 20:19:42.606681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:53.266 [2024-07-24 20:19:42.606769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.266 [... matching nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, every one ABORTED - SQ DELETION (00/08), elided for WRITE lba 63840-64024 and READ lba 63008-63816, 2024-07-24 20:19:42.606809 through 20:19:42.611987 ...]
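Every elided pair above tells the same story: an I/O that was queued on the severed 4420 connection completes with ABORTED - SQ DELETION, where the (00/08) shorthand is status code type 0x00 (generic) and status code 0x08, NVMe's "Command Aborted due to SQ Deletion". No data is lost; once the controller resets onto the surviving path, bdev_nvme retries the aborted commands, which is consistent with the lone "0" result printed once the @59 wait returns. To gauge the size of such an abort burst from the saved artifact (path taken from the @63 step above):

  grep -c 'ABORTED - SQ DELETION' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt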
The recv state of tqpair=0x13d9ba0 is same with the state(5) to be set 00:25:53.269 [2024-07-24 20:19:42.612030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.269 [2024-07-24 20:19:42.612048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.269 [2024-07-24 20:19:42.612065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63824 len:8 PRP1 0x0 PRP2 0x0 00:25:53.269 [2024-07-24 20:19:42.612083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.269 [2024-07-24 20:19:42.612161] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13d9ba0 was disconnected and freed. reset controller. 00:25:53.269 [2024-07-24 20:19:42.612186] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:53.269 [2024-07-24 20:19:42.612233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.269 [2024-07-24 20:19:42.612258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.269 [2024-07-24 20:19:42.612278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.269 [2024-07-24 20:19:42.612296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.269 [2024-07-24 20:19:42.612315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.269 [2024-07-24 20:19:42.612333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.270 [2024-07-24 20:19:42.612352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.270 [2024-07-24 20:19:42.612370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.270 [2024-07-24 20:19:42.612387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.270 [2024-07-24 20:19:42.616875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.270 [2024-07-24 20:19:42.616927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b3790 (9): Bad file descriptor 00:25:53.270 [2024-07-24 20:19:42.700952] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
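The failover above is the expected reaction to the target dropping its 10.0.0.2:4420 listener while an alternate path is registered. For reference, a minimal sketch of the target-side listener migration that provokes this kind of failover, assuming a running nvmf target, the NQN and addresses taken from the log, and stock scripts/rpc.py flags (not the test's actual script):

  rpc=scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Publish the next path first so the host has somewhere to fail over to.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -f ipv4 -a 10.0.0.2 -s 4421
  # Then withdraw the current path; queued I/O on it is aborted (SQ DELETION)
  # and the host's bdev_nvme layer resets the controller onto the new trid.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -f ipv4 -a 10.0.0.2 -s 4420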
00:25:53.270 [2024-07-24 20:19:46.725894] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3,2,1,0 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 [admin-queue aborts condensed]
00:25:53.270 [2024-07-24 20:19:46.726139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3790 is same with the state(5) to be set
00:25:53.270 [2024-07-24 20:19:46.726420-.731676] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 nsid:1 lba:28920-29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:28224-28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated per-command print/completion pairs condensed]
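Every completion in these floods carries the same status, "ABORTED - SQ DELETION (00/08)": status code type 00h (generic) with status code 08h, reported for commands still queued when the submission queue was deleted. A minimal shell sketch of the decode, assuming the raw 16-bit completion status word layout from the NVMe spec (bit 0 phase tag, bits 8:1 status code, bits 11:9 status code type):

  status=0x0010                    # example raw status word (hypothetical input)
  sc=$(( (status >> 1) & 0xff ))   # bits 8:1  -> status code
  sct=$(( (status >> 9) & 0x7 ))   # bits 11:9 -> status code type
  printf 'SCT %02xh / SC %02xh\n' "$sct" "$sc"   # prints: SCT 00h / SC 08h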
00:25:53.273 [2024-07-24 20:19:46.731696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9d80 is same with the state(5) to be set
00:25:53.273 [2024-07-24 20:19:46.731719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:53.273 [2024-07-24 20:19:46.731736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: READ sqid:1 cid:0 nsid:1 lba:28912 len:8 PRP1 0x0 PRP2 0x0, ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.273 [2024-07-24 20:19:46.731849] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13d9d80 was disconnected and freed. reset controller.
00:25:53.273 [2024-07-24 20:19:46.731874] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:53.273 [2024-07-24 20:19:46.731896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.273 [2024-07-24 20:19:46.736355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.273 [2024-07-24 20:19:46.736410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b3790 (9): Bad file descriptor
00:25:53.273 [2024-07-24 20:19:46.786127] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:53.273 [2024-07-24 20:19:51.354359-.355891] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 nsid:1 lba:14616-14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:14848-14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated per-command print/completion pairs condensed; log continues]
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.355911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.355929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.355950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.355968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.355989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.356007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.356027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.356046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.356067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.356085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.356105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.356124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.356150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.356170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.356190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.356209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.356229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.356248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.356269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.274 [2024-07-24 20:19:51.356287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:53.274 [2024-07-24 20:19:51.356308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356722] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.356973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.356994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357110] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15232 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.275 [2024-07-24 20:19:51.357601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.275 [2024-07-24 20:19:51.357619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.357658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.357703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.357742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.357782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.357821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.357860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.357898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 
[2024-07-24 20:19:51.357937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.357975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.357995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.276 [2024-07-24 20:19:51.358457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.276 [2024-07-24 20:19:51.358723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.276 [2024-07-24 20:19:51.358743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
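[editor's note, not part of the test output: a storm like the one above is easier to audit with a few lines of parsing than by eye. A minimal sketch that tallies the printed commands per opcode and LBA range, assuming only the nvme_io_qpair_print_command line format shown above; the script and its names are hypothetical, not SPDK or autotest tooling:

    #!/usr/bin/env python3
    """Summarize an SPDK 'ABORTED - SQ DELETION' storm from an autotest log."""
    import re
    import sys
    from collections import defaultdict

    # Matches the command lines printed by nvme_io_qpair_print_command, e.g.
    #   "*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14848 len:8 SGL DATA BLOCK ..."
    CMD_RE = re.compile(
        r"\*NOTICE\*: (?P<op>READ|WRITE) sqid:(?P<sqid>\d+) cid:\d+ "
        r"nsid:\d+ lba:(?P<lba>\d+) len:\d+"
    )

    def summarize(lines):
        """Group LBAs of printed commands by (opcode, submission queue id)."""
        lbas = defaultdict(list)
        for line in lines:
            m = CMD_RE.search(line)
            if m:
                lbas[(m["op"], int(m["sqid"]))].append(int(m["lba"]))
        return {key: sorted(vals) for key, vals in lbas.items()}

    if __name__ == "__main__":
        for (op, sqid), lbas in sorted(summarize(sys.stdin).items()):
            print(f"{op} sqid:{sqid}: {len(lbas)} commands, lba {lbas[0]}..{lbas[-1]}")

fed this section on stdin, it reduces the hundreds of NOTICE pairs to one line per (opcode, queue), e.g. the READ and WRITE ranges on sqid:1 seen above, which is usually all that matters when checking that the aborts were the expected ones]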
00:25:53.276 [2024-07-24 20:19:51.358866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:53.276 [2024-07-24 20:19:51.358891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:8 PRP1 0x0 PRP2 0x0
00:25:53.276 [2024-07-24 20:19:51.358910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.276 [2024-07-24 20:19:51.359234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the aborting-queued-i/o / Command-completed-manually / command / completion group repeats through 00:25:53.279 for every request still queued on sqid:1, each printed with cid:0 and PRP1 0x0 PRP2 0x0 and completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0: WRITE lba:15496..15632 len:8, READ lba:14616..14832 len:8, then WRITE lba:14848..14936 len:8 ...]
00:25:53.279 [2024-07-24 20:19:51.363236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:53.279 [2024-07-24 20:19:51.363252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:53.279 [2024-07-24 20:19:51.363272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:8 PRP1 0x0 PRP2 0x0
00:25:53.279 [2024-07-24 20:19:51.363290] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14952 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14960 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14968 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14984 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14992 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15000 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15016 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.363938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.363953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.363968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15024 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.363986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.364003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.364018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.364034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15032 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.364051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.364069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.364084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.364099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.364116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:53.279 [2024-07-24 20:19:51.364133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.364148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.364164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15048 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.364181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.364198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.364213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.279 [2024-07-24 20:19:51.364228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15056 len:8 PRP1 0x0 PRP2 0x0 00:25:53.279 [2024-07-24 20:19:51.364246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.279 [2024-07-24 20:19:51.364264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.279 [2024-07-24 20:19:51.364279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15064 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15080 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15088 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364582] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15096 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15112 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15120 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15128 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.364935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.364950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.364965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.364982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15144 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15152 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15160 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15176 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15184 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 
20:19:51.365410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15192 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15208 len:8 PRP1 0x0 PRP2 0x0 00:25:53.280 [2024-07-24 20:19:51.365609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.280 [2024-07-24 20:19:51.365626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.280 [2024-07-24 20:19:51.365640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.280 [2024-07-24 20:19:51.365655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15216 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.365684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.365701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.365715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.365730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15224 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.365754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.365772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.365787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.365802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.365819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.365836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.365851] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.365866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15240 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.365883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.365900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.365915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.365930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15248 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.372677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.372729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.372749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.372766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15256 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.372786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.372805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.372820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.372836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.372854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.372872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.372886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.372902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15272 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.372919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.372937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.372952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.372967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15280 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.372985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15288 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15304 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15312 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15320 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 
20:19:51.373445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15336 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15344 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15352 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15368 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15376 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15384 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:8 PRP1 0x0 PRP2 0x0 00:25:53.281 [2024-07-24 20:19:51.373932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.281 [2024-07-24 20:19:51.373950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.281 [2024-07-24 20:19:51.373965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.281 [2024-07-24 20:19:51.373980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15400 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.373997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15408 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14840 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15416 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:15424 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15432 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15440 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15448 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15464 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15472 len:8 PRP1 0x0 PRP2 0x0 
00:25:53.282 [2024-07-24 20:19:51.374669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15480 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.282 [2024-07-24 20:19:51.374765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.282 [2024-07-24 20:19:51.374781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:8 PRP1 0x0 PRP2 0x0 00:25:53.282 [2024-07-24 20:19:51.374798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.374883] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13d9d80 was disconnected and freed. reset controller. 00:25:53.282 [2024-07-24 20:19:51.374911] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:53.282 [2024-07-24 20:19:51.374964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.282 [2024-07-24 20:19:51.374990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.375011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.282 [2024-07-24 20:19:51.375030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.375049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.282 [2024-07-24 20:19:51.375069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.375088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.282 [2024-07-24 20:19:51.375107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.282 [2024-07-24 20:19:51.375125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:53.282 [2024-07-24 20:19:51.375177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b3790 (9): Bad file descriptor
00:25:53.282 [2024-07-24 20:19:51.379609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.282 [2024-07-24 20:19:51.506307] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:53.282
00:25:53.282 Latency(us)
00:25:53.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:53.282 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:53.282 Verification LBA range: start 0x0 length 0x4000
00:25:53.282 NVMe0n1 : 15.05 6341.74 24.77 486.78 0.00 18659.42 731.21 45049.93
00:25:53.282 ===================================================================================================================
00:25:53.282 Total : 6341.74 24.77 486.78 0.00 18659.42 731.21 45049.93
00:25:53.282 Received shutdown signal, test time was about 15.000000 seconds
00:25:53.282
00:25:53.282 Latency(us)
00:25:53.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:53.282 ===================================================================================================================
00:25:53.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2126691
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2126691 /var/tmp/bdevperf.sock
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2126691 ']'
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:53.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:53.282 20:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:53.541 20:19:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:53.541 20:19:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:25:53.541 20:19:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:53.799 [2024-07-24 20:19:57.558398] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:53.799 20:19:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:54.366 [2024-07-24 20:19:57.859382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:54.366 20:19:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:54.624 NVMe0n1
00:25:54.624 20:19:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:55.189
00:25:55.190 20:19:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:55.755
00:25:55.755 20:19:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:55.755 20:19:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:56.014 20:19:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:56.579 20:20:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:59.861 20:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:59.861 20:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:59.861 20:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2127484
00:25:59.861 20:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:59.861 20:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2127484
00:26:00.795 0
00:26:01.065 20:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:01.065 [2024-07-24 20:19:56.897315] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:26:01.065 [2024-07-24 20:19:56.897414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126691 ]
00:26:01.065 EAL: No free 2048 kB hugepages reported on node 1
00:26:01.065 [2024-07-24 20:19:56.974243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:01.065 [2024-07-24 20:19:57.110729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:01.065 [2024-07-24 20:20:00.066807] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:01.065 [2024-07-24 20:20:00.066919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.065 [2024-07-24 20:20:00.066951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.065 [2024-07-24 20:20:00.066987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.065 [2024-07-24 20:20:00.067006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.065 [2024-07-24 20:20:00.067026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.065 [2024-07-24 20:20:00.067045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.065 [2024-07-24 20:20:00.067064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.065 [2024-07-24 20:20:00.067084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.065 [2024-07-24 20:20:00.067103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:01.065 [2024-07-24 20:20:00.067168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:01.065 [2024-07-24 20:20:00.067213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd7790 (9): Bad file descriptor
00:26:01.065 [2024-07-24 20:20:00.073976] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:01.065 Running I/O for 1 seconds...
00:26:01.065
00:26:01.065 Latency(us)
00:26:01.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:01.065 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:01.065 Verification LBA range: start 0x0 length 0x4000
00:26:01.065 NVMe0n1 : 1.01 6354.32 24.82 0.00 0.00 20046.50 3883.61 15922.82
00:26:01.065 ===================================================================================================================
00:26:01.065 Total : 6354.32 24.82 0.00 0.00 20046.50 3883.61 15922.82
00:26:01.065 20:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:01.065 20:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:01.343 20:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:01.601 20:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:01.601 20:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:01.860 20:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:02.117 20:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:05.396 20:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:05.396 20:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:05.396 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2126691
00:26:05.396 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2126691 ']'
00:26:05.396 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2126691
00:26:05.396 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:26:05.396 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:05.396 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2126691
00:26:05.654 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:05.654 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:05.654 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2126691'
00:26:05.654 killing process with pid 2126691
00:26:05.654 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2126691
00:26:05.654 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2126691
00:26:05.913 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:05.913 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:06.170 rmmod nvme_tcp
00:26:06.170 rmmod nvme_fabrics
00:26:06.170 rmmod nvme_keyring
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2124169 ']'
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2124169
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2124169 ']'
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2124169
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2124169
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:06.170 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2124169'
00:26:06.170 killing process with pid 2124169
00:26:06.428 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2124169
00:26:06.428 20:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2124169
00:26:06.687 20:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:06.687 20:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:06.687 20:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:06.687 20:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:06.687 20:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:06.687 20:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:06.687 20:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:06.687 20:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:08.591 20:20:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:08.591
00:26:08.591 real 0m40.249s
00:26:08.591 user 2m21.110s
00:26:08.591 sys 0m7.197s
00:26:08.591 20:20:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:08.591 20:20:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:08.591 ************************************
00:26:08.591 END TEST nvmf_failover
00:26:08.591 ************************************
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.850 ************************************
00:26:08.850 START TEST nvmf_host_discovery
00:26:08.850 ************************************
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:08.850 * Looking for test storage...
00:26:08.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery --
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.850 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:08.851 20:20:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:08.851 20:20:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:12.136 20:20:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:12.136 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:12.136 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.136 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:12.137 Found net devices under 0000:84:00.0: cvl_0_0 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:12.137 Found net devices under 0000:84:00.1: cvl_0_1 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.137 20:20:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:26:12.137 00:26:12.137 --- 10.0.0.2 ping statistics --- 00:26:12.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.137 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:26:12.137 00:26:12.137 --- 10.0.0.1 ping statistics --- 00:26:12.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.137 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2130236 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2130236 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2130236 ']' 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
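A note on the nvmftestinit plumbing traced above: the phy test moves the target-side port of the E810 link (cvl_0_0) into a private network namespace so that initiator and target traffic really crosses the physical wire rather than a loopback path. Condensed from the commands logged above (interface names and addresses are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Every later target-side command is wrapped in ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), which is why the nvmf_tgt started below listens on 10.0.0.2 while the host-side tools reach it from 10.0.0.1.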
00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:12.137 20:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.137 [2024-07-24 20:20:15.513976] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:26:12.137 [2024-07-24 20:20:15.514076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.137 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.137 [2024-07-24 20:20:15.605333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.137 [2024-07-24 20:20:15.744580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.137 [2024-07-24 20:20:15.744642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.137 [2024-07-24 20:20:15.744662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.137 [2024-07-24 20:20:15.744678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.137 [2024-07-24 20:20:15.744692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.137 [2024-07-24 20:20:15.744726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.072 [2024-07-24 20:20:16.627383] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:26:13.072 [2024-07-24 20:20:16.635625] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.072 null0 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.072 null1 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2130390 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2130390 /tmp/host.sock 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2130390 ']' 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:13.072 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.072 20:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.072 [2024-07-24 20:20:16.721495] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
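The discovery test runs two SPDK processes, which the interleaved traces above can obscure: the nvmf target runs inside the namespace and answers RPCs on the default /var/tmp/spdk.sock, while the nvmf_tgt whose startup banner appears here is a second, host-side instance (pid 2130390) that only provides the initiator/bdev layer and is driven through /tmp/host.sock. A sketch of the bring-up condensed from the trace above, with the long Jenkins paths shortened:

    # target side, inside the namespace (started earlier as:
    #   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512    # 1000 MB, 512-byte blocks
    rpc.py bdev_null_create null1 1000 512
    # host side: a separate app instance with its own RPC socket
    nvmf_tgt -m 0x1 -r /tmp/host.sock &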
00:26:13.072 [2024-07-24 20:20:16.721587] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130390 ] 00:26:13.072 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.072 [2024-07-24 20:20:16.802418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.330 [2024-07-24 20:20:16.941330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.262 
20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.262 20:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:14.520 20:20:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.520 [2024-07-24 20:20:18.176003] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.520 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.778 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:14.778 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:14.778 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:14.778 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:14.778 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:14.778 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:14.778 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:26:14.779 20:20:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:15.037 [2024-07-24 20:20:18.787900] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:15.037 [2024-07-24 20:20:18.787937] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:15.037 [2024-07-24 20:20:18.787970] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:15.295 
[2024-07-24 20:20:18.874258] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:15.295 [2024-07-24 20:20:18.980144] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:15.295 [2024-07-24 20:20:18.980177] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
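At this point the discovery flow has completed end to end: bdev_nvme_start_discovery was watching the discovery subsystem on 10.0.0.2:8009, the target published nqn.2016-06.io.spdk:cnode0 with a namespace, a 4420 data listener and the allowed host NQN, and the host attached the subsystem automatically as controller nvme0 backed by bdev nvme0n1. The minimal RPC sequence, condensed from the trace above (addresses, NQNs and socket paths are the ones used in this run):

    # host side: follow the discovery log and attach whatever it reports
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # target side: publish a subsystem; the discovery service picks it up
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # verify from the host: controller nvme0 exists and exposes bdev nvme0n1
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
    rpc.py -s /tmp/host.sock bdev_get_bdevs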
00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:15.860 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.118 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.377 20:20:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.377 [2024-07-24 20:20:19.997600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:16.377 [2024-07-24 20:20:19.998835] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:16.377 [2024-07-24 20:20:19.998882] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # jq -r '.[].name' 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:16.377 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:16.378 20:20:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.378 [2024-07-24 20:20:20.124902] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:16.378 20:20:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:16.635 [2024-07-24 20:20:20.183675] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:16.636 [2024-07-24 20:20:20.183709] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:16.636 [2024-07-24 20:20:20.183722] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.569 [2024-07-24 20:20:21.294302] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:17.569 [2024-07-24 20:20:21.294350] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:17.569 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:17.569 [2024-07-24 20:20:21.300320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.569 [2024-07-24 20:20:21.300389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:17.569 [2024-07-24 20:20:21.300413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.569 [2024-07-24 20:20:21.300439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.569 [2024-07-24 20:20:21.300460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.570 [2024-07-24 20:20:21.300478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.570 [2024-07-24 20:20:21.300497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.570 [2024-07-24 20:20:21.300514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.570 [2024-07-24 20:20:21.300532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043230 is same with the state(5) to be set 00:26:17.570 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:17.570 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.570 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.570 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.570 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.570 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.570 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.570 [2024-07-24 20:20:21.310320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043230 (9): Bad file descriptor 00:26:17.570 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.570 [2024-07-24 20:20:21.320379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.570 [2024-07-24 20:20:21.320642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.570 [2024-07-24 20:20:21.320684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2043230 with addr=10.0.0.2, port=4420 00:26:17.570 [2024-07-24 20:20:21.320707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043230 is same with the state(5) to be set 00:26:17.570 [2024-07-24 20:20:21.320738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043230 (9): Bad file descriptor 00:26:17.570 [2024-07-24 20:20:21.320767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.570 [2024-07-24 20:20:21.320786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.570 [2024-07-24 20:20:21.320806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
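Interleaved with the reconnect noise, the test keeps re-checking how many bdev add/remove notifications the target has emitted. Reconstructed from the notify_get_notifications pipelines and the notification_count= / notify_id= assignments visible in the trace (a sketch, not the verbatim helper):

    # notify_id marks the first event id not yet consumed; each call counts
    # the events past that offset and advances the offset over them, which
    # matches the 0 -> 1 -> 2 -> 4 progression of notify_id seen across
    # this trace.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }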
00:26:17.570 [2024-07-24 20:20:21.320833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.570 [2024-07-24 20:20:21.330489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.570 [2024-07-24 20:20:21.330677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.570 [2024-07-24 20:20:21.330713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2043230 with addr=10.0.0.2, port=4420 00:26:17.570 [2024-07-24 20:20:21.330734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043230 is same with the state(5) to be set 00:26:17.570 [2024-07-24 20:20:21.330776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043230 (9): Bad file descriptor 00:26:17.570 [2024-07-24 20:20:21.330805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.570 [2024-07-24 20:20:21.330824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.570 [2024-07-24 20:20:21.330842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.570 [2024-07-24 20:20:21.330867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.570 [2024-07-24 20:20:21.340582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.570 [2024-07-24 20:20:21.340833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.570 [2024-07-24 20:20:21.340873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2043230 with addr=10.0.0.2, port=4420 00:26:17.570 [2024-07-24 20:20:21.340895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043230 is same with the state(5) to be set 00:26:17.570 [2024-07-24 20:20:21.340925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043230 (9): Bad file descriptor 00:26:17.570 [2024-07-24 20:20:21.340953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.570 [2024-07-24 20:20:21.340972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.570 [2024-07-24 20:20:21.340990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.570 [2024-07-24 20:20:21.341016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
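The repeated connect() failures in this stretch are expected: errno 111 is ECONNREFUSED, and the host driver is busy retrying the 10.0.0.2:4420 path whose listener the test tore down a moment earlier with:

    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The retries stop once the next discovery log page prunes the stale 4420 path and leaves only 4421, which is what the "not found" / "found again" records further down report.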
00:26:17.570 [2024-07-24 20:20:21.350676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.570 [2024-07-24 20:20:21.350938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.570 [2024-07-24 20:20:21.350975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2043230 with addr=10.0.0.2, port=4420 00:26:17.570 [2024-07-24 20:20:21.350997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043230 is same with the state(5) to be set 00:26:17.570 [2024-07-24 20:20:21.351027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043230 (9): Bad file descriptor 00:26:17.570 [2024-07-24 20:20:21.351054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.570 [2024-07-24 20:20:21.351074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.570 [2024-07-24 20:20:21.351092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.570 [2024-07-24 20:20:21.351118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.829 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.829 [2024-07-24 20:20:21.360777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.829 [2024-07-24 20:20:21.361053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.829 [2024-07-24 20:20:21.361090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2043230 with addr=10.0.0.2, port=4420 00:26:17.829 [2024-07-24 20:20:21.361112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2043230 is same with the state(5) to be set 00:26:17.829 [2024-07-24 20:20:21.361142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043230 (9): Bad file descriptor 00:26:17.829 [2024-07-24 20:20:21.361169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.829 [2024-07-24 20:20:21.361189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.829 [2024-07-24 20:20:21.361208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.829 [2024-07-24 20:20:21.361238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.829 [2024-07-24 20:20:21.370869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.829 [2024-07-24 20:20:21.371219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.829 [2024-07-24 20:20:21.371270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2043230 with addr=10.0.0.2, port=4420 00:26:17.829 [2024-07-24 20:20:21.371293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043230 is same with the state(5) to be set 00:26:17.829 [2024-07-24 20:20:21.371324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043230 (9): Bad file descriptor 00:26:17.829 [2024-07-24 20:20:21.371352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.829 [2024-07-24 20:20:21.371370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.829 [2024-07-24 20:20:21.371389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.829 [2024-07-24 20:20:21.371415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.829 [2024-07-24 20:20:21.380961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.829 [2024-07-24 20:20:21.381236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.829 [2024-07-24 20:20:21.381275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2043230 with addr=10.0.0.2, port=4420 00:26:17.829 [2024-07-24 20:20:21.381296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043230 is same with the state(5) to be set 00:26:17.829 [2024-07-24 20:20:21.381327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043230 (9): Bad file descriptor 00:26:17.829 [2024-07-24 20:20:21.381355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.829 [2024-07-24 20:20:21.381373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.829 [2024-07-24 20:20:21.381396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.829 [2024-07-24 20:20:21.381421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
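The bdev_nvme_get_controllers | jq | sort -n | xargs pipeline that runs for each of these path checks reduces a controller's active paths to one sorted line of ports, so the assertions can compare plain strings such as "4420 4421" or, once 4420 is pruned, just "4421". Reassembled from the xtrace, host/discovery.sh's helper is effectively:

    get_subsystem_paths() {
        # One trsvcid (port) per active path of controller $1, numerically
        # sorted and joined onto a single line by xargs, e.g. "4420 4421".
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }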
00:26:17.829 [2024-07-24 20:20:21.381730] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:17.829 [2024-07-24 20:20:21.381769] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.830 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.088 20:20:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.461 [2024-07-24 20:20:22.838606] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:19.461 [2024-07-24 20:20:22.838648] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:19.461 [2024-07-24 20:20:22.838681] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:19.461 [2024-07-24 20:20:22.966078] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:19.461 [2024-07-24 20:20:23.238879] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:19.461 [2024-07-24 20:20:23.238941] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:19.461 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.461 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.461 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:19.461 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.461 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:19.461 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.461 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:19.461 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.462 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:26:19.462 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.721 request: 00:26:19.721 { 00:26:19.721 "name": "nvme", 00:26:19.721 "trtype": "tcp", 00:26:19.722 "traddr": "10.0.0.2", 00:26:19.722 "adrfam": "ipv4", 00:26:19.722 "trsvcid": "8009", 00:26:19.722 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:19.722 "wait_for_attach": true, 00:26:19.722 "method": "bdev_nvme_start_discovery", 00:26:19.722 "req_id": 1 00:26:19.722 } 00:26:19.722 Got JSON-RPC error response 00:26:19.722 response: 00:26:19.722 { 00:26:19.722 "code": -17, 00:26:19.722 "message": "File exists" 00:26:19.722 } 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.722 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.723 request: 00:26:19.723 { 00:26:19.723 "name": "nvme_second", 00:26:19.723 "trtype": "tcp", 00:26:19.723 "traddr": "10.0.0.2", 00:26:19.723 "adrfam": "ipv4", 00:26:19.723 "trsvcid": "8009", 00:26:19.723 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:19.723 "wait_for_attach": true, 00:26:19.723 "method": "bdev_nvme_start_discovery", 00:26:19.723 "req_id": 1 00:26:19.723 } 00:26:19.723 Got JSON-RPC error response 00:26:19.723 response: 00:26:19.723 { 00:26:19.723 "code": -17, 00:26:19.723 "message": "File exists" 00:26:19.723 } 00:26:19.723 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:19.724 20:20:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:26:19.724 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:26:19.725 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:26:19.725 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:26:19.725 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:19.725 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:26:19.725 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:19.725 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:26:19.725 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:19.725 20:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:21.125 [2024-07-24 20:20:24.475053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.125 [2024-07-24 20:20:24.475117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203f430 with addr=10.0.0.2, port=8010
00:26:21.125 [2024-07-24 20:20:24.475157] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:21.125 [2024-07-24 20:20:24.475178] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:21.125 [2024-07-24 20:20:24.475195] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:26:22.058 [2024-07-24 20:20:25.477610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.058 [2024-07-24 20:20:25.477693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203f430 with addr=10.0.0.2, port=8010
00:26:22.058 [2024-07-24 20:20:25.477738] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:22.058 [2024-07-24 20:20:25.477758] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:22.058 [2024-07-24 20:20:25.477775] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:26:22.992 [2024-07-24 20:20:26.479625] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:26:22.992 request:
00:26:22.992 {
00:26:22.992 "name": "nvme_second",
00:26:22.992 "trtype": "tcp",
00:26:22.992 "traddr": "10.0.0.2",
00:26:22.992 "adrfam": "ipv4",
00:26:22.992 "trsvcid": "8010",
00:26:22.992 "hostnqn": "nqn.2021-12.io.spdk:test",
00:26:22.992 "wait_for_attach": false,
00:26:22.992 "attach_timeout_ms": 3000,
00:26:22.992 "method": "bdev_nvme_start_discovery",
00:26:22.992 "req_id": 1
00:26:22.992 }
00:26:22.992 Got JSON-RPC error response
00:26:22.992 response:
00:26:22.992 {
00:26:22.992 "code": -110,
00:26:22.992 "message": "Connection timed out"
00:26:22.992 }
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2130390
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:22.992 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2130236 ']'
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2130236
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2130236 ']'
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2130236
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2130236
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2130236'
killing process with pid 2130236
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2130236
00:26:22.993 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2130236
00:26:23.252 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:23.252 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:23.252 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:23.252 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:23.252 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:23.252 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:23.252 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:23.252 20:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:25.788
00:26:25.788 real 0m16.605s
00:26:25.788 user 0m24.404s
00:26:25.788 sys 0m3.998s
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:25.788 ************************************
00:26:25.788 END TEST nvmf_host_discovery
00:26:25.788 ************************************
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
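(For reference: the NOT-wrapped discovery attempt traced in the test above is expected to fail, and the -110 "Connection timed out" JSON-RPC response is its pass condition. Outside the harness, the same call would look roughly like this sketch; the rpc.py path, host socket, and arguments are taken verbatim from the trace, and rpc_cmd is the harness wrapper around rpc.py.)

    # Sketch: reproduce the discovery attempt the test expects to time out.
    # Nothing accepts connections on 10.0.0.2:8010 (connect() fails with errno 111),
    # so after the 3000 ms attach timeout the RPC returns code -110.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000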
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.788 ************************************
00:26:25.788 START TEST nvmf_host_multipath_status
00:26:25.788 ************************************
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:26:25.788 * Looking for test storage...
00:26:25.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
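(The NVME_HOSTNQN/NVME_HOSTID pair assigned above comes from nvme-cli; below is a minimal sketch of one way to reproduce that derivation, assuming nvme-cli is installed. The exact expansion used by nvmf/common.sh is not shown in the trace, and the UUID differs per host.)

    # Sketch of the hostnqn/hostid setup traced above (values are host-specific).
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing UUID, as seen in the log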
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
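(The repeated toolchain segments in the PATH values above are a side effect of paths/export.sh prepending its directories every time it is sourced; the trace is harmless. Purely illustrative and not part of the harness, a one-liner that would collapse such duplicates:)

    # Illustrative PATH dedup (keeps the first occurrence of each component).
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')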
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:25.788 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:25.789 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:25.789 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:26:25.789 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:26:25.789 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable
00:26:25.789 20:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=()
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=()
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=()
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=()
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=()
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=()
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=()
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
Found 0000:84:00.0 (0x8086 - 0x159b)
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:28.324 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
Found 0000:84:00.1 (0x8086 - 0x159b)
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
Found net devices under 0000:84:00.0: cvl_0_0
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
Found net devices under 0000:84:00.1: cvl_0_1
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:28.325 20:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:28.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:28.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms
00:26:28.325
00:26:28.325 --- 10.0.0.2 ping statistics ---
00:26:28.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:28.325 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:28.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:28.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:26:28.325
00:26:28.325 --- 10.0.0.1 ping statistics ---
00:26:28.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:28.325 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2133757
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2133757
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2133757 ']'
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
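(Condensed from the nvmf_tcp_init trace above, as a runnable sketch: the e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 stays in the root namespace as the initiator side, and reachability is verified in both directions with a single ping. All names and addresses are taken from the trace; run as root.)

    # Target/initiator split used by the harness (commands as traced above):
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator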
00:26:28.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:28.325 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:28.584 [2024-07-24 20:20:32.161622] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:26:28.584 [2024-07-24 20:20:32.161721] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:28.584 EAL: No free 2048 kB hugepages reported on node 1
00:26:28.584 [2024-07-24 20:20:32.261591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:28.843 [2024-07-24 20:20:32.430612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:28.843 [2024-07-24 20:20:32.430692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:28.843 [2024-07-24 20:20:32.430719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:28.843 [2024-07-24 20:20:32.430740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:28.843 [2024-07-24 20:20:32.430759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:28.843 [2024-07-24 20:20:32.430861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:28.843 [2024-07-24 20:20:32.430869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:28.843 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:28.843 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:26:28.843 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:28.843 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:28.843 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:28.843 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:28.843 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2133757
00:26:28.843 20:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:29.410 [2024-07-24 20:20:33.172111] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:29.668 20:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:26:29.927 Malloc0
00:26:29.927 20:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:26:30.492 20:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:30.750 20:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:31.316 [2024-07-24 20:20:34.809801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:31.316 20:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:31.882 [2024-07-24 20:20:35.423618] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2134125
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2134125 /var/tmp/bdevperf.sock
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2134125 ']'
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
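(Condensed from the RPC trace above: the target side of the multipath setup is one malloc bdev exported through one subsystem with two TCP listeners, ports 4420 and 4421. In this sketch, $rpc stands for the full scripts/rpc.py path used in the trace, talking to the nvmf_tgt started inside cvl_0_0_ns_spdk.)

    # Two-listener multipath target, as configured above:
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421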
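(Every check_status round in the trace that follows reduces to the same probe pattern: set a listener's ANA state on the target, sleep, then ask the bdevperf side whether each path is current, connected, and accessible. A sketch of one probe, with $rpc as above; the jq filter is the one port_status uses in the trace, and bdevperf answers on /var/tmp/bdevperf.sock.)

    # One probe of the loop below: flip port 4420 to non_optimized, then inspect paths.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'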
00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:31.882 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:32.141 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.141 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:32.141 20:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:32.707 20:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:33.640 Nvme0n1 00:26:33.640 20:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:33.898 Nvme0n1 00:26:33.898 20:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:33.898 20:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:36.429 20:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:36.429 20:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:36.429 20:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:36.687 20:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:37.622 20:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:37.622 20:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:37.622 20:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.622 20:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:38.222 20:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.222 20:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:38.222 20:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.222 20:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.493 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.493 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.493 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.493 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.752 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.752 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.752 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.752 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:39.318 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.318 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:39.318 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.318 20:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:39.577 20:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.577 20:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:39.577 20:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.577 20:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:39.835 20:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.835 20:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:39.835 20:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:40.094 20:20:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:40.661 20:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:41.597 20:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:41.597 20:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:41.597 20:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.597 20:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.164 20:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.164 20:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:42.164 20:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.164 20:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.423 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.424 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.424 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.424 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.991 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.991 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.991 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.991 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.249 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.249 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.249 20:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.249 20:20:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.508 20:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.508 20:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:43.508 20:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.508 20:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:44.075 20:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.075 20:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:44.075 20:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:44.334 20:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:44.902 20:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:45.838 20:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:45.838 20:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:45.838 20:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.838 20:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:46.098 20:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.098 20:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:46.098 20:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.098 20:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:46.666 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:46.666 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:46.666 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.666 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:46.925 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.925 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:46.925 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.925 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:47.184 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.184 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:47.184 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.184 20:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:47.751 20:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.751 20:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:47.751 20:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.751 20:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.010 20:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.010 20:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:48.010 20:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:48.268 20:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:48.836 20:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:49.778 20:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:49.778 20:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:49.778 20:20:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.778 20:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.036 20:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.036 20:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:50.036 20:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.036 20:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.603 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.603 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.603 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.603 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.862 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.862 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.862 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.862 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.119 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.119 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.119 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.119 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.378 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.378 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:51.378 20:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.378 20:20:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.635 20:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:51.635 20:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:51.635 20:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:52.201 20:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:52.459 20:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:53.407 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:53.407 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:53.407 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.407 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.985 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.985 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:53.985 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.985 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:54.551 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:54.551 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:54.551 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.551 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:55.117 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.117 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:55.117 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
00:26:53.407 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:26:53.407 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:53.407 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.407 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:53.985 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:53.985 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:53.985 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.985 20:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:54.551 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:54.551 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:54.551 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:54.551 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:55.117 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:55.117 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:55.117 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:55.117 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:55.375 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:55.375 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:55.375 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:55.375 20:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:55.633 20:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:55.633 20:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:55.633 20:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:55.633 20:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:55.891 20:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
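check_status bundles six port_status assertions: current, connected and accessible for each of the two portals, always in 4420/4421 order. The run above confirms that with both listeners inaccessible, neither path is current or accessible while both remain connected (the TCP connections stay up; ANA only marks the paths unusable). A sketch matching the argument order seen in the trace (the name check_status is verbatim; the parameter layout is inferred from the @68-@73 calls):

  check_status() {
    # args: current_4420 current_4421 connected_4420 connected_4421 accessible_4420 accessible_4421
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
  }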
00:26:55.891 20:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:26:55.891 20:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:56.457 20:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:56.715 20:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:26:57.650 20:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:26:57.650 20:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:57.650 20:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:57.650 20:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:58.217 20:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:58.217 20:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:58.217 20:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:58.217 20:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:58.475 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:58.475 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:58.475 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:58.475 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:58.733 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:58.733 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:58.733 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:58.733 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:59.300 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:59.300 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:59.300 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:59.300 20:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:59.559 20:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:59.559 20:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:59.559 20:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:59.559 20:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:59.816 20:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:59.816 20:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
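Up to this point the bdev has been running with the default active_passive behavior, where at most one path is "current" at a time. The @116 RPC switches the multipath policy of Nvme0n1 so that I/O is spread across every usable path: from here on, every path in the best ANA state currently reported (optimized if any, otherwise non_optimized) is expected to show current == true, rather than a single selected path. The command, as issued against the bdevperf app above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active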
00:27:00.382 20:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:27:00.383 20:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:27:00.641 20:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:00.899 20:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:27:01.835 20:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:27:01.835 20:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:01.835 20:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:01.835 20:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:02.401 20:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:02.402 20:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:02.402 20:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:02.402 20:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:02.661 20:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:02.661 20:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:02.661 20:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:02.661 20:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:03.229 20:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:03.229 20:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:03.229 20:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:03.229 20:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:03.487 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:03.487 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:03.487 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:03.487 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:03.746 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:03.746 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:03.746 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:03.746 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:04.313 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
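For reference, the jq filter used in these checks walks output of the following shape. This is a trimmed illustration of what bdev_nvme_get_io_paths returns here, not captured output; the field values correspond to the optimized/optimized, active_active case just verified:

  {
    "poll_groups": [
      {
        "io_paths": [
          { "bdev_name": "Nvme0n1", "current": true, "connected": true, "accessible": true,
            "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" } },
          { "bdev_name": "Nvme0n1", "current": true, "connected": true, "accessible": true,
            "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4421" } }
        ]
      }
    ]
  }

select (.transport.trsvcid=="4421").current then reduces this to the single true/false that the [[ ... ]] test compares.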
00:27:04.313 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:27:04.313 20:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:04.572 20:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:04.831 20:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:27:06.207 20:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:27:06.208 20:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:06.208 20:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.208 20:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:06.208 20:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:06.208 20:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:06.208 20:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.208 20:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:06.775 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:06.775 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:06.775 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.775 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:07.033 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:07.033 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:07.033 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:07.033 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:07.291 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:07.291 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:07.291 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:07.291 20:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:07.549 20:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:07.549 20:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:07.549 20:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:07.549 20:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:08.115 20:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:08.115 20:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:27:08.115 20:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:08.388 20:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:27:08.964 20:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
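Taken together, the ANA transitions exercised in this section pin down how listener state and multipath policy combine. The six-tuples asserted by check_status (current, connected, accessible for ports 4420/4421, as observed above and in the checks that follow):

  ANA 4420/4421                  current       connected   accessible    policy
  inaccessible/inaccessible      false false   true true   false false   active_passive
  inaccessible/optimized         false true    true true   false true    active_passive
  optimized/optimized            true  true    true true   true  true    active_active
  non_optimized/optimized        false true    true true   true  true    active_active
  non_optimized/non_optimized    true  true    true true   true  true    active_active
  non_optimized/inaccessible     true  false   true true   true  false   active_active

connected stays true throughout: ANA transitions never drop the TCP connections, they only change which paths the initiator will actually use.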
00:27:09.898 20:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:27:09.898 20:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:09.898 20:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.898 20:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:10.155 20:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:10.155 20:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:10.155 20:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:10.155 20:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:10.721 20:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:10.721 20:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:10.721 20:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:10.721 20:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:10.980 20:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:10.980 20:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:10.980 20:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:10.980 20:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.547 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:11.547 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:11.547 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.547 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:11.805 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:11.805 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:11.805 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.805 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:12.063 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:12.063 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:27:12.063 20:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:12.630 20:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:27:12.887 20:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:27:13.821 20:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:27:13.821 20:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:13.821 20:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:13.821 20:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:14.387 20:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:14.387 20:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:14.387 20:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:14.387 20:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:14.644 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:14.644 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:14.644 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:14.644 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:14.902 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:14.902 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:14.902 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:14.902 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:15.160 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:15.160 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:15.160 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:15.160 20:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:15.419 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:15.419 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:15.419 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:15.419 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2134125
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2134125 ']'
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2134125
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2134125
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2134125'
killing process with pid 2134125
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2134125
00:27:15.680 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2134125
00:27:15.938 Connection closed with partial response:
00:27:15.938
00:27:15.938
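killprocess is the shared helper from common/autotest_common.sh traced at @950-@974 above: it sanity-checks the pid argument, confirms the process is alive and is not sudo, then kills it and waits for it to exit; the "Connection closed with partial response" lines are bdevperf shutting down mid-I/O as a result. Roughly, as reconstructed from the trace (the exact error handling in the real helper is an assumption):

  killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                           # @950: require a pid
    kill -0 "$pid"                                      # @954: is it still running?
    if [ "$(uname)" = Linux ]; then                     # @955
      process_name=$(ps --no-headers -o comm= "$pid")   # @956: here resolves to reactor_2
    fi
    [ "$process_name" = sudo ] && return 1              # @960: never kill a bare sudo
    echo "killing process with pid $pid"                # @968
    kill "$pid"                                         # @969
    wait "$pid"                                         # @974: reap it and collect its status
  }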
00:27:16.206 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2134125
00:27:16.206 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:16.206 [2024-07-24 20:20:35.506091] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:27:16.206 [2024-07-24 20:20:35.506221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134125 ]
00:27:16.206 EAL: No free 2048 kB hugepages reported on node 1
00:27:16.206 [2024-07-24 20:20:35.591759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:16.206 [2024-07-24 20:20:35.732820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:16.206 Running I/O for 90 seconds...
00:27:16.206 [2024-07-24 20:20:55.714160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.206 [2024-07-24 20:20:55.714235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:16.206 [2024-07-24 20:20:55.714325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.206 [2024-07-24 20:20:55.714357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:16.206 [2024-07-24 20:20:55.714390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.206 [2024-07-24 20:20:55.714414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:16.206 [2024-07-24 20:20:55.714452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.206 [2024-07-24 20:20:55.714477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:16.206 [2024-07-24 20:20:55.714510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.206 [2024-07-24 20:20:55.714533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:16.206 [2024-07-24 20:20:55.714564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.206 [2024-07-24 20:20:55.714587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:16.206 [2024-07-24 20:20:55.714618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:16.206 [2024-07-24 20:20:55.714641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:16.206
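The completions in this try.txt dump are the expected effect of the ANA flips above. In "ASYMMETRIC ACCESS INACCESSIBLE (03/02)", 03 is the NVMe status code type (Path Related Status) and 02 the status code within that type (Asymmetric Access Inaccessible): the target is failing I/O on a listener whose ANA state was just set to inaccessible, and dnr:0 (Do Not Retry clear) is what lets the initiator retry the command on the surviving path instead of failing it up the stack. A quick way to tally these events when reading such a dump offline, assuming it has been saved as try.txt:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt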
[2024-07-24 20:20:55.714672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.714694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.714726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.714748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.714778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.714801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.714832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.714874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.714906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.714929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.714958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.714980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.715010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.715031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.715061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.715083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.715114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.715136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.715165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.715187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 
cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.715217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.715240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.715268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.715289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.715318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.715340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:16.206 [2024-07-24 20:20:55.715370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.206 [2024-07-24 20:20:55.715392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.715422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.715453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.716957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.716979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 
[2024-07-24 20:20:55.717423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.717959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5728 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.717992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.718025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.718047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.718079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.718102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:16.207 [2024-07-24 20:20:55.718133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.207 [2024-07-24 20:20:55.718161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:16.208 [2024-07-24 20:20:55.718193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.208 [2024-07-24 20:20:55.718216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:16.208 [2024-07-24 20:20:55.718248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.208 [2024-07-24 20:20:55.718271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:16.208 [2024-07-24 20:20:55.718313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.208 [2024-07-24 20:20:55.718335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:16.208 [2024-07-24 20:20:55.718367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.208 [2024-07-24 20:20:55.718390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:16.208 [2024-07-24 20:20:55.718421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.208 [2024-07-24 20:20:55.718453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:16.208 [2024-07-24 20:20:55.718488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.208 [2024-07-24 20:20:55.718512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:16.208 [2024-07-24 20:20:55.718544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
00:27:16.208 [2024-07-24 20:20:55.718 - 20:20:55.722] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE (and one READ) commands, sqid:1 nsid:1 lba:5808-6288 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; roughly 60 near-identical command/completion notice pairs collapsed here.
00:27:16.208-00:27:16.213 [2024-07-24 20:21:16.528 - 20:21:16.546] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ and WRITE commands, sqid:1 nsid:1 lba:13544-14752 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; over 100 near-identical command/completion notice pairs collapsed here.
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.213 [2024-07-24 20:21:16.546118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.213 [2024-07-24 20:21:16.546169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.213 [2024-07-24 20:21:16.546228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.213 [2024-07-24 20:21:16.546280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.213 [2024-07-24 20:21:16.546331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.213 [2024-07-24 20:21:16.546382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.213 [2024-07-24 20:21:16.546443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.213 [2024-07-24 20:21:16.546506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.213 [2024-07-24 20:21:16.546560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.213 [2024-07-24 20:21:16.546610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:27:16.213 [2024-07-24 20:21:16.546640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.213 [2024-07-24 20:21:16.546661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.213 [2024-07-24 20:21:16.546714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:16.213 [2024-07-24 20:21:16.546744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.546767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.546796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.546819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.548940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.548973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.549045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.549374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.549425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.549696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.549747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.549799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.549851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.549902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.549968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.549997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.550020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.550049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.550071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.550101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.550123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.550153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.550176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.550205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.550227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.550257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.550279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.550309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:16.214 [2024-07-24 20:21:16.550331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.550361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.550383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.550414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.214 [2024-07-24 20:21:16.550445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.553736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.553773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.553811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.553836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.553880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.553905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.553934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.553957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.553987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.554020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.554050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.554072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.554102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.554124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.554154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.554176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:16.214 [2024-07-24 20:21:16.554207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.214 [2024-07-24 20:21:16.554229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.554736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.554788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.554840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.554946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.554976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.554998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.555212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:27:16.215 [2024-07-24 20:21:16.555243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.555265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.555488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.555594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.555647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.215 [2024-07-24 20:21:16.555753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.555806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.555836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.555858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.557306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.557340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.557376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.557401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.215 [2024-07-24 20:21:16.557440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.215 [2024-07-24 20:21:16.557469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.557522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.557575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.557627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.557679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.557731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.557792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.557855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.557907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.557957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.557994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.558018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.558048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.558071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.558100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.558122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.558153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.558176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:16.216 [2024-07-24 20:21:16.559111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.559382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.559444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.559508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.559560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.559718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.559770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.559823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.559927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.559956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.559978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.560008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.560031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.560061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.560082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.560112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.216 [2024-07-24 20:21:16.560140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.561262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.561297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.561338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.561362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.561394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.561417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.561461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.561495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.216 [2024-07-24 20:21:16.561525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.216 [2024-07-24 20:21:16.561547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:16.217 [2024-07-24 20:21:16.561577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.217 [2024-07-24 20:21:16.561599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.217 [2024-07-24 20:21:16.561629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.217 [2024-07-24 20:21:16.561652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:16.217 [2024-07-24 20:21:16.561681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.217 [2024-07-24 20:21:16.561704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:16.217 [2024-07-24 20:21:16.561734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.217 [2024-07-24 20:21:16.561765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:16.217 [2024-07-24 20:21:16.561795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.217 [2024-07-24 20:21:16.561818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:27:16.217 [2024-07-24 20:21:16.561847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:16.217 [2024-07-24 20:21:16.561870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[... the run continues with many more command/completion pairs of the same shape: READ and WRITE I/O on sqid:1 nsid:1 (lba 13912-16032, len:8), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0, sqhd advancing from 000b through 007f and wrapping on to 0058, timestamps 20:21:16.561 through 20:21:16.587 ...]
00:27:16.222 [2024-07-24 20:21:16.587541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:16.222 [2024-07-24 20:21:16.587563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:16.222 [2024-07-24 20:21:16.587594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.222 [2024-07-24 20:21:16.587617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:16.222 [2024-07-24 20:21:16.587647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.222 [2024-07-24 20:21:16.587669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:16.222 [2024-07-24 20:21:16.587699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.222 [2024-07-24 20:21:16.587721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:16.222 [2024-07-24 20:21:16.587751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.222 [2024-07-24 20:21:16.587773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:16.222 [2024-07-24 20:21:16.587810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.222 [2024-07-24 20:21:16.587833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.222 [2024-07-24 20:21:16.587864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.222 [2024-07-24 20:21:16.587887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:16.222 [2024-07-24 20:21:16.587917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.222 [2024-07-24 20:21:16.587939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:16.222 [2024-07-24 20:21:16.587970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.222 [2024-07-24 20:21:16.587992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:16.222 Received shutdown signal, test time was about 41.607265 seconds
00:27:16.222
00:27:16.222 Latency(us)
00:27:16.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:16.222 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:16.222 Verification LBA range: start 0x0 length 0x4000
00:27:16.222 Nvme0n1 : 41.61 6002.12 23.45 0.00 0.00 21266.84 1881.13 5020737.23
00:27:16.222 ===================================================================================================================
00:27:16.222 Total : 6002.12 23.45 0.00 0.00 21266.84 1881.13 5020737.23
00:27:16.222 20:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem
nqn.2016-06.io.spdk:cnode1 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.481 rmmod nvme_tcp 00:27:16.481 rmmod nvme_fabrics 00:27:16.481 rmmod nvme_keyring 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2133757 ']' 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2133757 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2133757 ']' 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2133757 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2133757 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2133757' 00:27:16.481 killing process with pid 2133757 00:27:16.481 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2133757 00:27:16.739 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2133757 00:27:16.998 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:16.998 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:16.998 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:16.998 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.998 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:16.998 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.998 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.998 20:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.545 00:27:19.545 real 0m53.626s 00:27:19.545 user 2m45.681s 00:27:19.545 sys 0m14.170s 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:19.545 ************************************ 00:27:19.545 END TEST nvmf_host_multipath_status 00:27:19.545 ************************************ 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.545 ************************************ 00:27:19.545 START TEST nvmf_discovery_remove_ifc 00:27:19.545 ************************************ 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:19.545 * Looking for test storage... 
00:27:19.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.545 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
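The heavily duplicated PATH above is expected: paths/export.sh is sourced once per nested test script, and each pass blindly prepends the same Go, protoc, and golangci-lint directories again without checking for earlier copies. A minimal sketch of a deduplicating prepend, assuming the directory layout shown in this log (prepend_path is a hypothetical helper, not part of export.sh):

    # prepend_path: add a directory to PATH only if it is not already there.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already in PATH, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH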
00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.546 20:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:22.075 20:21:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:22.075 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:22.075 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:22.075 Found net devices under 0000:84:00.0: cvl_0_0 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.075 
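The pci_net_devs assignments replayed above are how common.sh maps a matching PCI function to its kernel network interfaces: sysfs exposes every netdev owned by a device under /sys/bus/pci/devices/<addr>/net/. A standalone sketch of the same lookup, using the first e810 port from this run (the commands mirror the nvmf/common.sh lines in the xtrace):

    pci=0000:84:00.0
    # Glob the netdev directories owned by this PCI function ...
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # ... then strip the sysfs path, keeping only the interface names.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"    # e.g. cvl_0_0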
20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:22.075 Found net devices under 0000:84:00.1: cvl_0_1 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:22.075 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:22.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:22.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:22.334 00:27:22.334 --- 10.0.0.2 ping statistics --- 00:27:22.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.334 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:27:22.334 00:27:22.334 --- 10.0.0.1 ping statistics --- 00:27:22.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.334 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2142119 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2142119 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2142119 ']' 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
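The nvmf_tcp_init sequence replayed above gives the test a real two-port topology: one e810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to act as the target, while its peer (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, with an iptables rule opening the NVMe/TCP listener port. Condensed from the commands in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target application is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt invocation above), which is what lets the discovery_remove_ifc test later pull the interface out from under a live connection.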
00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.334 20:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.334 [2024-07-24 20:21:26.035535] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:27:22.334 [2024-07-24 20:21:26.035635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.334 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.592 [2024-07-24 20:21:26.129294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.592 [2024-07-24 20:21:26.268890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.592 [2024-07-24 20:21:26.268964] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.592 [2024-07-24 20:21:26.268984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.592 [2024-07-24 20:21:26.269001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.592 [2024-07-24 20:21:26.269016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.592 [2024-07-24 20:21:26.269053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.850 [2024-07-24 20:21:26.454437] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.850 [2024-07-24 20:21:26.462650] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:22.850 null0 00:27:22.850 [2024-07-24 20:21:26.494584] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2142261 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2142261 /tmp/host.sock 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2142261 ']' 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:22.850 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.850 20:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.850 [2024-07-24 20:21:26.577247] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:27:22.850 [2024-07-24 20:21:26.577345] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142261 ] 00:27:22.850 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.108 [2024-07-24 20:21:26.660763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.108 [2024-07-24 20:21:26.802790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.366 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.624 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.624 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:23.624 20:21:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.624 20:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.582 [2024-07-24 20:21:28.273868] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:24.582 [2024-07-24 20:21:28.273904] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:24.582 [2024-07-24 20:21:28.273935] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:24.583 [2024-07-24 20:21:28.360223] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:24.840 [2024-07-24 20:21:28.586129] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:24.840 [2024-07-24 20:21:28.586209] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:24.840 [2024-07-24 20:21:28.586265] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:24.840 [2024-07-24 20:21:28.586298] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:24.840 [2024-07-24 20:21:28.586336] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.840 [2024-07-24 20:21:28.592222] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11bae50 was disconnected and freed. delete nvme_qpair. 
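From here the test loops on a pair of helpers defined in discovery_remove_ifc.sh: get_bdev_list, whose rpc_cmd | jq | sort | xargs pipeline is replayed throughout the surrounding xtrace, and wait_for_bdev, which re-polls once per second until the attached bdev list matches what the test expects ("nvme0n1" after attach, "" once the interface is removed). A sketch of their visible behavior, reconstructed from the xtrace rather than copied from the script:

    get_bdev_list() {
        # One sorted, space-joined line of bdev names from the host app.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the bdev list equals the expected value, e.g. "nvme0n1" or "".
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }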
00:27:24.840 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.098 20:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:26.031 20:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.405 20:21:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.405 20:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:28.340 20:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.273 20:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.647 20:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.647 20:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.647 20:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:30.647 20:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.647 20:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:30.647 20:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:30.647 20:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:30.647 20:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.647 [2024-07-24 20:21:34.026597] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:27:30.647 [2024-07-24 20:21:34.026683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.647 [2024-07-24 20:21:34.026713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.647 [2024-07-24 20:21:34.026737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.647 [2024-07-24 20:21:34.026755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.647 [2024-07-24 20:21:34.026787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.647 [2024-07-24 20:21:34.026808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.647 [2024-07-24 20:21:34.026828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.647 [2024-07-24 20:21:34.026846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.647 [2024-07-24 20:21:34.026865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.647 [2024-07-24 20:21:34.026885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.647 [2024-07-24 20:21:34.026903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1181890 is same with the state(5) to be set
00:27:30.647 [2024-07-24 20:21:34.036613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1181890 (9): Bad file descriptor
00:27:30.647 [2024-07-24 20:21:34.046666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:27:30.647 20:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:30.647 20:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:31.580 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:31.580 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:31.580 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:31.580 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.580 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:31.580 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:31.580 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:31.580 [2024-07-24 20:21:35.079466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:27:31.580 [2024-07-24 20:21:35.079551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1181890 with addr=10.0.0.2, port=4420
00:27:31.581 [2024-07-24 20:21:35.079583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1181890 is same with the state(5) to be set
00:27:31.581 [2024-07-24 20:21:35.079638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1181890 (9): Bad file descriptor
00:27:31.581 [2024-07-24 20:21:35.080228] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:31.581 [2024-07-24 20:21:35.080292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:27:31.581 [2024-07-24 20:21:35.080318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:27:31.581 [2024-07-24 20:21:35.080340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:27:31.581 [2024-07-24 20:21:35.080380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.581 [2024-07-24 20:21:35.080405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:27:31.581 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.581 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:31.581 20:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:32.514 [2024-07-24 20:21:36.082949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:27:32.514 [2024-07-24 20:21:36.082996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:27:32.514 [2024-07-24 20:21:36.083017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:27:32.514 [2024-07-24 20:21:36.083035] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state
00:27:32.514 [2024-07-24 20:21:36.083063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.514 [2024-07-24 20:21:36.083115] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:27:32.514 [2024-07-24 20:21:36.083167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:32.514 [2024-07-24 20:21:36.083197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.514 [2024-07-24 20:21:36.083223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:32.514 [2024-07-24 20:21:36.083241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.514 [2024-07-24 20:21:36.083260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:32.514 [2024-07-24 20:21:36.083277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.514 [2024-07-24 20:21:36.083296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:32.514 [2024-07-24 20:21:36.083314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.514 [2024-07-24 20:21:36.083332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:27:32.514 [2024-07-24 20:21:36.083350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.514 [2024-07-24 20:21:36.083367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state.
00:27:32.514 [2024-07-24 20:21:36.083486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1180cf0 (9): Bad file descriptor 00:27:32.514 [2024-07-24 20:21:36.084487] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:32.514 [2024-07-24 20:21:36.084517] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.514 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.772 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:32.772 20:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.706 20:21:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:33.706 20:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.638 [2024-07-24 20:21:38.141533] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:34.638 [2024-07-24 20:21:38.141574] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:34.638 [2024-07-24 20:21:38.141607] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.638 [2024-07-24 20:21:38.270047] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.638 [2024-07-24 20:21:38.372659] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:34.638 [2024-07-24 20:21:38.372724] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:34.638 [2024-07-24 20:21:38.372772] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:34.638 [2024-07-24 20:21:38.372813] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:34.638 [2024-07-24 20:21:38.372833] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:34.638 [2024-07-24 20:21:38.378617] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11c47d0 was disconnected and freed. delete nvme_qpair. 
00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2142261 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2142261 ']' 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2142261 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:34.638 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.896 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2142261 00:27:34.896 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:34.896 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.896 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2142261' 00:27:34.896 killing process with pid 2142261 00:27:34.896 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2142261 00:27:34.896 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2142261 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:35.154 rmmod nvme_tcp 00:27:35.154 rmmod nvme_fabrics 00:27:35.154 rmmod nvme_keyring 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:35.154 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2142119 ']' 00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2142119 00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2142119 ']' 00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2142119 00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@955 -- # uname
00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2142119
00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2142119'
00:27:35.155 killing process with pid 2142119
00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2142119
00:27:35.155 20:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2142119
00:27:35.721 20:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:35.721 20:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:35.721 20:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:35.721 20:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:35.721 20:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:35.721 20:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:35.721 20:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:35.721 20:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:37.624 20:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:37.624
00:27:37.624 real 0m18.500s
00:27:37.624 user 0m25.695s
00:27:37.624 sys 0m4.060s
00:27:37.624 20:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:37.624 20:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:37.624 ************************************
00:27:37.624 END TEST nvmf_discovery_remove_ifc
00:27:37.624 ************************************
00:27:37.624 20:21:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:27:37.624 20:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:27:37.624 20:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:37.624 20:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.624 ************************************
00:27:37.624 START TEST nvmf_identify_kernel_target
00:27:37.624 ************************************
00:27:37.624 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:27:37.624 * Looking for test storage...
00:27:37.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:37.883 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:37.884 20:21:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:41.173 20:21:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:41.173 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:41.173 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:41.173 Found net devices under 0000:84:00.0: cvl_0_0 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:41.173 Found net devices under 0000:84:00.1: cvl_0_1 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.173 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:27:41.174 00:27:41.174 --- 10.0.0.2 ping statistics --- 00:27:41.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.174 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:27:41.174 00:27:41.174 --- 10.0.0.1 ping statistics --- 00:27:41.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.174 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:41.174 20:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:42.559 Waiting for block devices as requested 00:27:42.559 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:42.559 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:42.849 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:42.849 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:42.849 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:43.118 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:43.118 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:43.118 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:43.118 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:43.376 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:43.376 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:43.376 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:43.376 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:43.635 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:43.635 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:43.635 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:43.895 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:43.895 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:43.895 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:43.895 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:43.895 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:43.895 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:43.895 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:43.895 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:43.895 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:43.896 No valid GPT data, bailing 00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@391 -- # pt=
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4
00:27:43.896 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:44.155 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420
00:27:44.155
00:27:44.155 Discovery Log Number of Records 2, Generation counter 2
00:27:44.155 =====Discovery Log Entry 0======
00:27:44.155 trtype: tcp
00:27:44.155 adrfam: ipv4
00:27:44.155 subtype: current discovery subsystem
00:27:44.155 treq: not specified, sq flow control disable supported
00:27:44.155 portid: 1
00:27:44.155 trsvcid: 4420
00:27:44.155 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:44.155 traddr: 10.0.0.1
00:27:44.155 eflags: none
00:27:44.155 sectype: none
00:27:44.155 =====Discovery Log Entry 1======
00:27:44.155 trtype: tcp
00:27:44.155 adrfam: ipv4
00:27:44.155 subtype: nvme subsystem
00:27:44.155 treq: not specified, sq flow control disable supported
00:27:44.155 portid: 1
00:27:44.155 trsvcid: 4420
00:27:44.155 subnqn: nqn.2016-06.io.spdk:testnqn
00:27:44.155 traddr: 10.0.0.1
00:27:44.155 eflags: none
00:27:44.155 sectype: none
00:27:44.155 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:27:44.155 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:27:44.155 EAL: No free 2048 kB hugepages reported on node 1
00:27:44.155 =====================================================
00:27:44.155 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:44.155 =====================================================
00:27:44.155 Controller Capabilities/Features
00:27:44.155 ================================
00:27:44.155 Vendor ID: 0000
00:27:44.155 Subsystem Vendor ID: 0000
00:27:44.155 Serial Number: f3f87cfcc410f6dc86fc
00:27:44.155 Model Number: Linux
00:27:44.155 Firmware Version: 6.7.0-68
00:27:44.155 Recommended Arb Burst: 0
00:27:44.155 IEEE OUI Identifier: 00 00 00
00:27:44.155 Multi-path I/O
00:27:44.155 May have multiple subsystem ports: No
00:27:44.155 May have multiple controllers: No
00:27:44.155 Associated with SR-IOV VF: No
00:27:44.155 Max Data Transfer Size: Unlimited
00:27:44.155 Max Number of Namespaces: 0
00:27:44.155 Max Number of I/O Queues: 1024
00:27:44.155 NVMe Specification Version (VS): 1.3
00:27:44.155 NVMe Specification Version (Identify): 1.3
00:27:44.155 Maximum Queue Entries: 1024
00:27:44.155 Contiguous Queues Required: No
00:27:44.155 Arbitration Mechanisms Supported
00:27:44.155 Weighted Round Robin: Not Supported
00:27:44.155 Vendor Specific: Not Supported
00:27:44.155 Reset Timeout: 7500 ms
00:27:44.155 Doorbell Stride: 4 bytes
00:27:44.155 NVM Subsystem Reset: Not Supported
00:27:44.155 Command Sets Supported
00:27:44.155 NVM Command Set: Supported
00:27:44.155 Boot Partition: Not Supported
00:27:44.155 Memory Page Size Minimum: 4096 bytes
00:27:44.155 Memory Page Size Maximum: 4096 bytes
00:27:44.155 Persistent Memory Region: Not Supported
00:27:44.155 Optional Asynchronous Events Supported
00:27:44.155 Namespace Attribute Notices: Not Supported
00:27:44.155 Firmware Activation Notices: Not Supported
00:27:44.155 ANA Change Notices: Not Supported
00:27:44.155 PLE Aggregate Log Change Notices: Not Supported
00:27:44.155 LBA Status Info Alert Notices: Not Supported
00:27:44.155 EGE Aggregate Log Change Notices: Not Supported
00:27:44.155 Normal NVM Subsystem Shutdown event: Not Supported
00:27:44.155 Zone Descriptor Change Notices: Not Supported
00:27:44.155 Discovery Log Change Notices: Supported
00:27:44.155 Controller Attributes
00:27:44.155 128-bit Host Identifier: Not Supported
00:27:44.155 Non-Operational Permissive Mode: Not Supported
00:27:44.155 NVM Sets: Not Supported
00:27:44.155 Read Recovery Levels: Not Supported
00:27:44.155 Endurance Groups: Not Supported
00:27:44.156 Predictable Latency Mode: Not Supported
00:27:44.156 Traffic Based Keep ALive: Not Supported
00:27:44.156 Namespace Granularity: Not Supported
00:27:44.156 SQ Associations: Not Supported
00:27:44.156 UUID List: Not Supported
00:27:44.156 Multi-Domain Subsystem: Not Supported
00:27:44.156 Fixed Capacity Management: Not Supported
00:27:44.156 Variable Capacity Management: Not Supported
00:27:44.156 Delete Endurance Group: Not Supported
00:27:44.156 Delete NVM Set: Not Supported
00:27:44.156 Extended LBA Formats Supported: Not Supported
00:27:44.156 Flexible Data Placement Supported: Not Supported
00:27:44.156
00:27:44.156 Controller Memory Buffer Support
00:27:44.156 ================================
00:27:44.156 Supported: No
00:27:44.156
00:27:44.156 Persistent Memory Region Support
00:27:44.156 ================================
00:27:44.156 Supported: No
00:27:44.156
00:27:44.156 Admin Command Set Attributes
00:27:44.156 ============================
00:27:44.156 Security Send/Receive: Not Supported
00:27:44.156 Format NVM: Not Supported
00:27:44.156 Firmware Activate/Download: Not Supported
00:27:44.156 Namespace Management: Not Supported
00:27:44.156 Device Self-Test: Not Supported
00:27:44.156 Directives: Not Supported
00:27:44.156 NVMe-MI: Not Supported
00:27:44.156 Virtualization Management: Not Supported
00:27:44.156 Doorbell Buffer Config: Not Supported
00:27:44.156 Get LBA Status Capability: Not Supported
00:27:44.156 Command & Feature Lockdown Capability: Not Supported
00:27:44.156 Abort Command Limit: 1
00:27:44.156 Async Event Request Limit: 1
00:27:44.156 Number of Firmware Slots: N/A
00:27:44.156 Firmware Slot 1 Read-Only: N/A
00:27:44.156 Firmware Activation Without Reset: N/A
00:27:44.156 Multiple Update Detection Support: N/A
00:27:44.156 Firmware Update Granularity: No Information Provided
00:27:44.156 Per-Namespace SMART Log: No
00:27:44.156 Asymmetric Namespace Access Log Page: Not Supported
00:27:44.156 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:44.156 Command Effects Log Page: Not Supported
00:27:44.156 Get Log Page Extended Data: Supported
00:27:44.156 Telemetry Log Pages: Not Supported
00:27:44.156 Persistent Event Log Pages: Not Supported
00:27:44.156 Supported Log Pages Log Page: May Support
00:27:44.156 Commands Supported & Effects Log Page: Not Supported
00:27:44.156 Feature Identifiers & Effects Log Page:May Support
00:27:44.156 NVMe-MI Commands & Effects Log Page: May Support
00:27:44.156 Data Area 4 for Telemetry Log: Not Supported
00:27:44.156 Error Log Page Entries Supported: 1
00:27:44.156 Keep Alive: Not Supported
00:27:44.156
00:27:44.156 NVM Command Set Attributes
00:27:44.156 ==========================
00:27:44.156 Submission Queue Entry Size
00:27:44.156 Max: 1
00:27:44.156 Min: 1
00:27:44.156 Completion Queue Entry Size
00:27:44.156 Max: 1
00:27:44.156 Min: 1
00:27:44.156 Number of Namespaces: 0
00:27:44.156 Compare Command: Not Supported
00:27:44.156 Write Uncorrectable Command: Not Supported
00:27:44.156 Dataset Management Command: Not Supported
00:27:44.156 Write Zeroes Command: Not Supported
00:27:44.156 Set Features Save Field: Not Supported
00:27:44.156 Reservations: Not Supported
00:27:44.156 Timestamp: Not Supported
00:27:44.156 Copy: Not Supported
00:27:44.156 Volatile Write Cache: Not Present
00:27:44.156 Atomic Write Unit (Normal): 1
00:27:44.156 Atomic Write Unit (PFail): 1
00:27:44.156 Atomic Compare & Write Unit: 1
00:27:44.156 Fused Compare & Write: Not Supported
00:27:44.156 Scatter-Gather List
00:27:44.156 SGL Command Set: Supported
00:27:44.156 SGL Keyed: Not Supported
00:27:44.156 SGL Bit Bucket Descriptor: Not Supported
00:27:44.156 SGL Metadata Pointer: Not Supported
00:27:44.156 Oversized SGL: Not Supported
00:27:44.156 SGL Metadata Address: Not Supported
00:27:44.156 SGL Offset: Supported
00:27:44.156 Transport SGL Data Block: Not Supported
00:27:44.156 Replay Protected Memory Block: Not Supported
00:27:44.156
00:27:44.156 Firmware Slot Information
00:27:44.156 =========================
00:27:44.156 Active slot: 0
00:27:44.156
00:27:44.156
00:27:44.156 Error Log
00:27:44.156 =========
00:27:44.156
00:27:44.156 Active Namespaces
00:27:44.156 =================
00:27:44.156 Discovery Log Page
00:27:44.156 ==================
00:27:44.156 Generation Counter: 2
00:27:44.156 Number of Records: 2
00:27:44.156 Record Format: 0
00:27:44.156
00:27:44.156 Discovery Log Entry 0
00:27:44.156 ----------------------
00:27:44.156 Transport Type: 3 (TCP)
00:27:44.156 Address Family: 1 (IPv4)
00:27:44.156 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:44.156 Entry Flags:
00:27:44.156 Duplicate Returned Information: 0
00:27:44.156 Explicit Persistent Connection Support for Discovery: 0
00:27:44.156 Transport Requirements:
00:27:44.156 Secure Channel: Not Specified
00:27:44.156 Port ID: 1 (0x0001)
00:27:44.156 Controller ID: 65535 (0xffff)
00:27:44.156 Admin Max SQ Size: 32
00:27:44.156 Transport Service Identifier: 4420
00:27:44.156 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:44.156 Transport Address: 10.0.0.1
00:27:44.156 Discovery Log Entry 1
00:27:44.156 ----------------------
00:27:44.156 Transport Type: 3 (TCP)
00:27:44.156 Address Family: 1 (IPv4)
00:27:44.156 Subsystem Type: 2 (NVM Subsystem)
00:27:44.156 Entry Flags:
00:27:44.156 Duplicate Returned Information: 0
00:27:44.156 Explicit Persistent Connection Support for Discovery: 0
00:27:44.156 Transport Requirements:
00:27:44.156 Secure Channel: Not Specified
00:27:44.156 Port ID: 1 (0x0001)
00:27:44.156 Controller ID: 65535 (0xffff)
00:27:44.156 Admin Max SQ Size: 32
00:27:44.156 Transport Service Identifier: 4420
00:27:44.156 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:27:44.156 Transport Address: 10.0.0.1
00:27:44.156 20:21:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:44.417 EAL: No free 2048 kB hugepages reported on node 1
00:27:44.417 get_feature(0x01) failed
00:27:44.417 get_feature(0x02) failed
00:27:44.417 get_feature(0x04) failed
00:27:44.417 =====================================================
00:27:44.417 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:44.417 =====================================================
00:27:44.417 Controller Capabilities/Features
00:27:44.417 ================================
00:27:44.417 Vendor ID: 0000
00:27:44.417 Subsystem Vendor ID: 0000
00:27:44.417 Serial Number: 234b81de53b64ef32a06
00:27:44.417 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:27:44.417 Firmware Version: 6.7.0-68
00:27:44.417 Recommended Arb Burst: 6
00:27:44.417 IEEE OUI Identifier: 00 00 00
00:27:44.417 Multi-path I/O
00:27:44.417 May have multiple subsystem ports: Yes
00:27:44.417 May have multiple controllers: Yes
00:27:44.417 Associated with SR-IOV VF: No
00:27:44.417 Max Data Transfer Size: Unlimited
00:27:44.417 Max Number of Namespaces: 1024
00:27:44.417 Max Number of I/O Queues: 128
00:27:44.417 NVMe Specification Version (VS): 1.3
00:27:44.417 NVMe Specification Version (Identify): 1.3
00:27:44.417 Maximum Queue Entries: 1024
00:27:44.417 Contiguous Queues Required: No
00:27:44.417 Arbitration Mechanisms Supported
00:27:44.417 Weighted Round Robin: Not Supported
00:27:44.417 Vendor Specific: Not Supported
00:27:44.417 Reset Timeout: 7500 ms
00:27:44.417 Doorbell Stride: 4 bytes
00:27:44.417 NVM Subsystem Reset: Not Supported
00:27:44.417 Command Sets Supported
00:27:44.417 NVM Command Set: Supported
00:27:44.417 Boot Partition: Not Supported
00:27:44.417 Memory Page Size Minimum: 4096 bytes
00:27:44.417 Memory Page Size Maximum: 4096 bytes
00:27:44.417 Persistent Memory Region: Not Supported
00:27:44.417 Optional Asynchronous Events Supported
00:27:44.417 Namespace Attribute Notices: Supported
00:27:44.417 Firmware Activation Notices: Not Supported
00:27:44.417 ANA Change Notices: Supported
00:27:44.417 PLE Aggregate Log Change Notices: Not Supported
00:27:44.417 LBA Status Info Alert Notices: Not Supported
00:27:44.417 EGE Aggregate Log Change Notices: Not Supported
00:27:44.417 Normal NVM
Subsystem Shutdown event: Not Supported 00:27:44.417 Zone Descriptor Change Notices: Not Supported 00:27:44.417 Discovery Log Change Notices: Not Supported 00:27:44.417 Controller Attributes 00:27:44.417 128-bit Host Identifier: Supported 00:27:44.417 Non-Operational Permissive Mode: Not Supported 00:27:44.417 NVM Sets: Not Supported 00:27:44.417 Read Recovery Levels: Not Supported 00:27:44.417 Endurance Groups: Not Supported 00:27:44.417 Predictable Latency Mode: Not Supported 00:27:44.417 Traffic Based Keep ALive: Supported 00:27:44.417 Namespace Granularity: Not Supported 00:27:44.417 SQ Associations: Not Supported 00:27:44.417 UUID List: Not Supported 00:27:44.417 Multi-Domain Subsystem: Not Supported 00:27:44.417 Fixed Capacity Management: Not Supported 00:27:44.417 Variable Capacity Management: Not Supported 00:27:44.417 Delete Endurance Group: Not Supported 00:27:44.417 Delete NVM Set: Not Supported 00:27:44.417 Extended LBA Formats Supported: Not Supported 00:27:44.417 Flexible Data Placement Supported: Not Supported 00:27:44.417 00:27:44.417 Controller Memory Buffer Support 00:27:44.417 ================================ 00:27:44.417 Supported: No 00:27:44.417 00:27:44.417 Persistent Memory Region Support 00:27:44.417 ================================ 00:27:44.417 Supported: No 00:27:44.417 00:27:44.417 Admin Command Set Attributes 00:27:44.417 ============================ 00:27:44.417 Security Send/Receive: Not Supported 00:27:44.417 Format NVM: Not Supported 00:27:44.417 Firmware Activate/Download: Not Supported 00:27:44.417 Namespace Management: Not Supported 00:27:44.417 Device Self-Test: Not Supported 00:27:44.417 Directives: Not Supported 00:27:44.417 NVMe-MI: Not Supported 00:27:44.417 Virtualization Management: Not Supported 00:27:44.417 Doorbell Buffer Config: Not Supported 00:27:44.417 Get LBA Status Capability: Not Supported 00:27:44.417 Command & Feature Lockdown Capability: Not Supported 00:27:44.417 Abort Command Limit: 4 00:27:44.417 Async Event Request Limit: 4 00:27:44.417 Number of Firmware Slots: N/A 00:27:44.417 Firmware Slot 1 Read-Only: N/A 00:27:44.417 Firmware Activation Without Reset: N/A 00:27:44.417 Multiple Update Detection Support: N/A 00:27:44.417 Firmware Update Granularity: No Information Provided 00:27:44.417 Per-Namespace SMART Log: Yes 00:27:44.417 Asymmetric Namespace Access Log Page: Supported 00:27:44.417 ANA Transition Time : 10 sec 00:27:44.417 00:27:44.417 Asymmetric Namespace Access Capabilities 00:27:44.417 ANA Optimized State : Supported 00:27:44.417 ANA Non-Optimized State : Supported 00:27:44.417 ANA Inaccessible State : Supported 00:27:44.417 ANA Persistent Loss State : Supported 00:27:44.417 ANA Change State : Supported 00:27:44.417 ANAGRPID is not changed : No 00:27:44.417 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:44.417 00:27:44.417 ANA Group Identifier Maximum : 128 00:27:44.417 Number of ANA Group Identifiers : 128 00:27:44.417 Max Number of Allowed Namespaces : 1024 00:27:44.417 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:44.417 Command Effects Log Page: Supported 00:27:44.417 Get Log Page Extended Data: Supported 00:27:44.417 Telemetry Log Pages: Not Supported 00:27:44.417 Persistent Event Log Pages: Not Supported 00:27:44.417 Supported Log Pages Log Page: May Support 00:27:44.417 Commands Supported & Effects Log Page: Not Supported 00:27:44.417 Feature Identifiers & Effects Log Page:May Support 00:27:44.417 NVMe-MI Commands & Effects Log Page: May Support 00:27:44.417 Data Area 4 for Telemetry Log: Not 
Supported 00:27:44.417 Error Log Page Entries Supported: 128 00:27:44.417 Keep Alive: Supported 00:27:44.418 Keep Alive Granularity: 1000 ms 00:27:44.418 00:27:44.418 NVM Command Set Attributes 00:27:44.418 ========================== 00:27:44.418 Submission Queue Entry Size 00:27:44.418 Max: 64 00:27:44.418 Min: 64 00:27:44.418 Completion Queue Entry Size 00:27:44.418 Max: 16 00:27:44.418 Min: 16 00:27:44.418 Number of Namespaces: 1024 00:27:44.418 Compare Command: Not Supported 00:27:44.418 Write Uncorrectable Command: Not Supported 00:27:44.418 Dataset Management Command: Supported 00:27:44.418 Write Zeroes Command: Supported 00:27:44.418 Set Features Save Field: Not Supported 00:27:44.418 Reservations: Not Supported 00:27:44.418 Timestamp: Not Supported 00:27:44.418 Copy: Not Supported 00:27:44.418 Volatile Write Cache: Present 00:27:44.418 Atomic Write Unit (Normal): 1 00:27:44.418 Atomic Write Unit (PFail): 1 00:27:44.418 Atomic Compare & Write Unit: 1 00:27:44.418 Fused Compare & Write: Not Supported 00:27:44.418 Scatter-Gather List 00:27:44.418 SGL Command Set: Supported 00:27:44.418 SGL Keyed: Not Supported 00:27:44.418 SGL Bit Bucket Descriptor: Not Supported 00:27:44.418 SGL Metadata Pointer: Not Supported 00:27:44.418 Oversized SGL: Not Supported 00:27:44.418 SGL Metadata Address: Not Supported 00:27:44.418 SGL Offset: Supported 00:27:44.418 Transport SGL Data Block: Not Supported 00:27:44.418 Replay Protected Memory Block: Not Supported 00:27:44.418 00:27:44.418 Firmware Slot Information 00:27:44.418 ========================= 00:27:44.418 Active slot: 0 00:27:44.418 00:27:44.418 Asymmetric Namespace Access 00:27:44.418 =========================== 00:27:44.418 Change Count : 0 00:27:44.418 Number of ANA Group Descriptors : 1 00:27:44.418 ANA Group Descriptor : 0 00:27:44.418 ANA Group ID : 1 00:27:44.418 Number of NSID Values : 1 00:27:44.418 Change Count : 0 00:27:44.418 ANA State : 1 00:27:44.418 Namespace Identifier : 1 00:27:44.418 00:27:44.418 Commands Supported and Effects 00:27:44.418 ============================== 00:27:44.418 Admin Commands 00:27:44.418 -------------- 00:27:44.418 Get Log Page (02h): Supported 00:27:44.418 Identify (06h): Supported 00:27:44.418 Abort (08h): Supported 00:27:44.418 Set Features (09h): Supported 00:27:44.418 Get Features (0Ah): Supported 00:27:44.418 Asynchronous Event Request (0Ch): Supported 00:27:44.418 Keep Alive (18h): Supported 00:27:44.418 I/O Commands 00:27:44.418 ------------ 00:27:44.418 Flush (00h): Supported 00:27:44.418 Write (01h): Supported LBA-Change 00:27:44.418 Read (02h): Supported 00:27:44.418 Write Zeroes (08h): Supported LBA-Change 00:27:44.418 Dataset Management (09h): Supported 00:27:44.418 00:27:44.418 Error Log 00:27:44.418 ========= 00:27:44.418 Entry: 0 00:27:44.418 Error Count: 0x3 00:27:44.418 Submission Queue Id: 0x0 00:27:44.418 Command Id: 0x5 00:27:44.418 Phase Bit: 0 00:27:44.418 Status Code: 0x2 00:27:44.418 Status Code Type: 0x0 00:27:44.418 Do Not Retry: 1 00:27:44.418 Error Location: 0x28 00:27:44.418 LBA: 0x0 00:27:44.418 Namespace: 0x0 00:27:44.418 Vendor Log Page: 0x0 00:27:44.418 ----------- 00:27:44.418 Entry: 1 00:27:44.418 Error Count: 0x2 00:27:44.418 Submission Queue Id: 0x0 00:27:44.418 Command Id: 0x5 00:27:44.418 Phase Bit: 0 00:27:44.418 Status Code: 0x2 00:27:44.418 Status Code Type: 0x0 00:27:44.418 Do Not Retry: 1 00:27:44.418 Error Location: 0x28 00:27:44.418 LBA: 0x0 00:27:44.418 Namespace: 0x0 00:27:44.418 Vendor Log Page: 0x0 00:27:44.418 ----------- 00:27:44.418 Entry: 2 
00:27:44.418 Error Count: 0x1 00:27:44.418 Submission Queue Id: 0x0 00:27:44.418 Command Id: 0x4 00:27:44.418 Phase Bit: 0 00:27:44.418 Status Code: 0x2 00:27:44.418 Status Code Type: 0x0 00:27:44.418 Do Not Retry: 1 00:27:44.418 Error Location: 0x28 00:27:44.418 LBA: 0x0 00:27:44.418 Namespace: 0x0 00:27:44.418 Vendor Log Page: 0x0 00:27:44.418 00:27:44.418 Number of Queues 00:27:44.418 ================ 00:27:44.418 Number of I/O Submission Queues: 128 00:27:44.418 Number of I/O Completion Queues: 128 00:27:44.418 00:27:44.418 ZNS Specific Controller Data 00:27:44.418 ============================ 00:27:44.418 Zone Append Size Limit: 0 00:27:44.418 00:27:44.418 00:27:44.418 Active Namespaces 00:27:44.418 ================= 00:27:44.418 get_feature(0x05) failed 00:27:44.418 Namespace ID:1 00:27:44.418 Command Set Identifier: NVM (00h) 00:27:44.418 Deallocate: Supported 00:27:44.418 Deallocated/Unwritten Error: Not Supported 00:27:44.418 Deallocated Read Value: Unknown 00:27:44.418 Deallocate in Write Zeroes: Not Supported 00:27:44.418 Deallocated Guard Field: 0xFFFF 00:27:44.418 Flush: Supported 00:27:44.418 Reservation: Not Supported 00:27:44.418 Namespace Sharing Capabilities: Multiple Controllers 00:27:44.418 Size (in LBAs): 1953525168 (931GiB) 00:27:44.418 Capacity (in LBAs): 1953525168 (931GiB) 00:27:44.418 Utilization (in LBAs): 1953525168 (931GiB) 00:27:44.418 UUID: 86cd45fa-ab2a-446c-84db-eb9d5533e3dd 00:27:44.418 Thin Provisioning: Not Supported 00:27:44.418 Per-NS Atomic Units: Yes 00:27:44.418 Atomic Boundary Size (Normal): 0 00:27:44.418 Atomic Boundary Size (PFail): 0 00:27:44.418 Atomic Boundary Offset: 0 00:27:44.418 NGUID/EUI64 Never Reused: No 00:27:44.418 ANA group ID: 1 00:27:44.418 Namespace Write Protected: No 00:27:44.418 Number of LBA Formats: 1 00:27:44.418 Current LBA Format: LBA Format #00 00:27:44.418 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:44.418 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.418 rmmod nvme_tcp 00:27:44.418 rmmod nvme_fabrics 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:44.418 
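The identify output above came from a target that nvmf/common.sh@653-677 assembled directly in the kernel's nvmet configfs tree. xtrace does not show redirection targets, so the attribute names below are the standard nvmet configfs ones rather than a verbatim copy of the script; a minimal sketch of that setup, assuming the same testnqn, backing device, and 10.0.0.1:4420 listener as this run:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model   # shows up as "Model Number" above
echo 1 > $subsys/attr_allow_any_host
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path         # the local drive found earlier
echo 1 > $subsys/namespaces/1/enable
echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
echo tcp > $nvmet/ports/1/addr_trtype
echo 4420 > $nvmet/ports/1/addr_trsvcid
echo ipv4 > $nvmet/ports/1/addr_adrfam
ln -s $subsys $nvmet/ports/1/subsystems/                     # expose the subsystem on the port

The clean_kernel_target trace that follows unwinds this in reverse: remove the port's subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.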
20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.418 20:21:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:46.952 20:21:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:48.331 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:48.331 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:48.331 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:48.331 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:48.331 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:48.331 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:48.331 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:48.331 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:48.331 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:48.331 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:48.331 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:48.331 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:48.331 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:48.591 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:48.591 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:48.591 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:49.527 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:49.527 00:27:49.527 real 0m11.824s 00:27:49.527 user 0m2.552s 00:27:49.527 sys 0m5.167s 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.527 ************************************ 00:27:49.527 END TEST nvmf_identify_kernel_target 00:27:49.527 ************************************ 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.527 ************************************ 00:27:49.527 START TEST nvmf_auth_host 00:27:49.527 ************************************ 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:49.527 * Looking for test storage... 00:27:49.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.527 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.528 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.528 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.528 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.528 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.528 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.786 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:49.786 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:49.786 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.786 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.786 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.786 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:49.787 20:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.323 20:21:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:52.323 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- 
# for pci in "${pci_devs[@]}" 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:52.323 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:52.323 Found net devices under 0000:84:00.0: cvl_0_0 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:52.323 Found net devices under 0000:84:00.1: cvl_0_1 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.323 20:21:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.323 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.324 20:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.324 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.324 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.324 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.324 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.324 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:27:52.583 00:27:52.583 --- 10.0.0.2 ping statistics --- 00:27:52.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.583 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:27:52.583 00:27:52.583 --- 10.0.0.1 ping statistics --- 00:27:52.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.583 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2149515 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2149515 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2149515 ']' 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
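The nvmf_tcp_init trace above is the entire test topology: with NET_TYPE=phy, the two ports of the e810 NIC (cvl_0_0, cvl_0_1) are presumably cabled back-to-back, and the target port is moved into its own network namespace so target and initiator traffic cross a real link. Condensed from the trace, with the same names and addresses:

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

This is also why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD just below: every nvmf_tgt invocation from here on runs under ip netns exec cvl_0_0_ns_spdk.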
00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.583 20:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=db588f8bbfb0facef140c4232d8cd364 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.h2t 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key db588f8bbfb0facef140c4232d8cd364 0 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 db588f8bbfb0facef140c4232d8cd364 0 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=db588f8bbfb0facef140c4232d8cd364 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.h2t 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.h2t 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.h2t 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:53.959 20:21:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3d55bfa3fb95540fe713440df90ae8138b60ef70b4a118319b4b5516e8d38325 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.d1C 00:27:53.959 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3d55bfa3fb95540fe713440df90ae8138b60ef70b4a118319b4b5516e8d38325 3 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3d55bfa3fb95540fe713440df90ae8138b60ef70b4a118319b4b5516e8d38325 3 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3d55bfa3fb95540fe713440df90ae8138b60ef70b4a118319b4b5516e8d38325 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.d1C 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.d1C 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.d1C 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9a35edc853cf7f7212833bffd9fb227eb53adf201e5a978a 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nHT 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9a35edc853cf7f7212833bffd9fb227eb53adf201e5a978a 0 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9a35edc853cf7f7212833bffd9fb227eb53adf201e5a978a 0 
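gen_dhchap_key above is just xxd plus a short inline python helper; what each /tmp/spdk.key-* file receives is the DH-HMAC-CHAP secret representation from NVMe TP 8006. A minimal re-creation of the null-digest case, assuming (as nvme-cli's gen-dhchap-key does) that the payload is the secret bytes followed by their little-endian CRC-32, base64-encoded; the digest ids match the digests map in the trace (null=0, sha256=1, sha384=2, sha512=3):

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as for keys[0]
python - "$key" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()          # assumption: the ASCII hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(0, base64.b64encode(secret + crc).decode()))
PY

The result (DHHC-1:00:...:) is chmod 0600'd and only its path is kept in keys[]/ckeys[]; the sha256/sha384/sha512 variants in this trace differ only in the digest id and in drawing 16, 24, or 32 random bytes.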
00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9a35edc853cf7f7212833bffd9fb227eb53adf201e5a978a 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nHT 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nHT 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nHT 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e01178642271b61d52012fb80c3d5beef08bd2c3837f686 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.upW 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e01178642271b61d52012fb80c3d5beef08bd2c3837f686 2 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e01178642271b61d52012fb80c3d5beef08bd2c3837f686 2 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e01178642271b61d52012fb80c3d5beef08bd2c3837f686 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:53.960 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.upW 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.upW 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.upW 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.218 20:21:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=912a735e6f10f0fee781d50ce3a42846 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SKH 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 912a735e6f10f0fee781d50ce3a42846 1 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 912a735e6f10f0fee781d50ce3a42846 1 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=912a735e6f10f0fee781d50ce3a42846 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SKH 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SKH 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.SKH 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=77be298f475bef23a65001843e260357 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mQ6 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 77be298f475bef23a65001843e260357 1 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 77be298f475bef23a65001843e260357 1 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=77be298f475bef23a65001843e260357 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mQ6 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mQ6 00:27:54.218 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mQ6 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07744bff3070c58e98d8c10eae707658745c170c68f13029 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Q4X 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07744bff3070c58e98d8c10eae707658745c170c68f13029 2 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07744bff3070c58e98d8c10eae707658745c170c68f13029 2 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07744bff3070c58e98d8c10eae707658745c170c68f13029 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Q4X 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Q4X 00:27:54.219 20:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Q4X 00:27:54.219 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:54.219 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:54.477 20:21:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f096b987b49528c6cad2386b12a0bb47 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nhu 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f096b987b49528c6cad2386b12a0bb47 0 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f096b987b49528c6cad2386b12a0bb47 0 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f096b987b49528c6cad2386b12a0bb47 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nhu 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nhu 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.nhu 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2188c69d77cce02ebfcb2815db5c8e2a07923868b5b81c68e8bfb3cb7359c901 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.air 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2188c69d77cce02ebfcb2815db5c8e2a07923868b5b81c68e8bfb3cb7359c901 3 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2188c69d77cce02ebfcb2815db5c8e2a07923868b5b81c68e8bfb3cb7359c901 3 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2188c69d77cce02ebfcb2815db5c8e2a07923868b5b81c68e8bfb3cb7359c901 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.air 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.air 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.air 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2149515 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2149515 ']' 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.477 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h2t 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.d1C ]] 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.d1C 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nHT 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.upW ]] 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
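
The gen_dhchap_key calls traced above (nvmf/common.sh@723-732) produce every key/ckey pair the same way: xxd draws the requested number of random hex characters from /dev/urandom, an inline python step wraps that ASCII string in the DHHC-1:<digest-index>: secret format, and the result lands mode 0600 in a mktemp file. A minimal stand-alone sketch of that flow; the python body is an assumption (the trace elides it), written to the DH-HMAC-CHAP secret representation of base64(key || crc32), with the CRC byte order also assumed:

digest=2   # index into (null sha256 sha384 sha512): sha384 here
len=48     # hex characters, i.e. len/2 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-sha384.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # byte order: assumption
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$file"
echo "$file"
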
/tmp/spdk.key-sha384.upW 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.SKH 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mQ6 ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mQ6 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Q4X 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nhu ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nhu 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.air 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.046 20:21:58 
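
With all five key files and four ckey files in place, host/auth.sh@80-82 (traced above) registers each one in the SPDK application's keyring before any authentication attempt. The loop is equivalent to the following, with a direct scripts/rpc.py invocation standing in for the suite's rpc_cmd wrapper and the keys/ckeys arrays assumed populated as above:

for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[4] is empty, so key4 gets no controller counterpart
    if [[ -n ${ckeys[i]} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
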
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:55.046 20:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:55.981 Waiting for block devices as requested 00:27:56.239 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:56.239 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:56.498 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:56.498 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:56.756 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:56.756 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:56.756 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:57.013 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:57.013 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:57.013 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:57.272 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:57.272 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:57.272 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:57.272 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:57.530 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:57.530 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:57.530 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:58.097 No valid GPT data, bailing 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:58.097 20:22:01 
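
Before building the kernel target, the trace scans /sys/block/nvme* for a namespace it may safely claim: zoned devices are skipped, and "No valid GPT data, bailing" is the expected spdk-gpt.py verdict for a disk with no partition table. A rough equivalent, with blkid standing in for the full block_in_use check:

nvme=
for block in /sys/block/nvme*; do
    dev=/dev/${block##*/}
    # skip zoned namespaces: queue/zoned must read "none"
    if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
        continue
    fi
    # no PTTYPE signature means the device is not already in use
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        nvme=$dev
        break
    fi
done
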
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:58.097 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:58.356 00:27:58.356 Discovery Log Number of Records 2, Generation counter 2 00:27:58.356 =====Discovery Log Entry 0====== 00:27:58.356 trtype: tcp 00:27:58.356 adrfam: ipv4 00:27:58.356 subtype: current discovery subsystem 00:27:58.356 treq: not specified, sq flow control disable supported 00:27:58.356 portid: 1 00:27:58.356 trsvcid: 4420 00:27:58.356 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:58.356 traddr: 10.0.0.1 00:27:58.356 eflags: none 00:27:58.356 sectype: none 00:27:58.356 =====Discovery Log Entry 1====== 00:27:58.356 trtype: tcp 00:27:58.356 adrfam: ipv4 00:27:58.356 subtype: nvme subsystem 00:27:58.356 treq: not specified, sq flow control disable supported 00:27:58.356 portid: 1 00:27:58.356 trsvcid: 4420 00:27:58.356 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:58.356 traddr: 10.0.0.1 00:27:58.356 eflags: none 00:27:58.356 sectype: none 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
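
configure_kernel_target (nvmf/common.sh@632-677) then wires subsystem, namespace and TCP port together in configfs, and nvmet_auth_init (host/auth.sh@36-38) swaps blanket access for an explicit host entry; the nvme discover output above confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 answer at 10.0.0.1:4420. The traced echoes map onto configfs roughly like this; the attribute file names are filled in from the kernel nvmet layout, since the trace does not show the redirection targets:

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"
echo 1            > "$subsys/attr_allow_any_host"   # relaxed until auth init
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# nvmet_auth_init: create the host entry and make it the only one allowed
mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/attr_allow_any_host"
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
      "$subsys/allowed_hosts/"
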
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.356 20:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.356 nvme0n1 00:27:58.356 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.356 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.356 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.356 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.356 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.356 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
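
host/auth.sh@42-61 runs the first authenticated connect, completed just above: nvmet_auth_set_key loads the kernel host entry with the digest, FFDHE group and both DHHC-1 secrets, bdev_nvme_set_options pins the SPDK initiator to the offered parameters, and bdev_nvme_attach_controller performs the DH-HMAC-CHAP handshake using the keyring entries registered earlier. One round, sketched; the kernel dhchap_* attribute names are assumed from the nvmet host layout, and the secrets are elided:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'   > "$host/dhchap_hash"
echo ffdhe2048        > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...:' > "$host/dhchap_key"        # host secret (elided)
echo 'DHHC-1:02:...:' > "$host/dhchap_ctrl_key"   # controller secret (elided)

scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
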
00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.615 nvme0n1 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.615 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.873 20:22:02 
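
Every attach in this log is verified and torn down the same way before the next combination runs: bdev_nvme_get_controllers must report exactly the controller just created (the interleaved "nvme0n1" lines are the bdev names printed by the attach RPC), after which the controller is detached. An equivalent check:

name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] && scripts/rpc.py bdev_nvme_detach_controller nvme0
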
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.873 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.874 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.144 nvme0n1 00:27:59.144 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.144 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.144 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.145 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.425 nvme0n1 00:27:59.425 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.426 20:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.426 nvme0n1 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.426 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.685 nvme0n1 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.685 20:22:03 
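
The loop markers at host/auth.sh@100-102 give the overall shape of the test: the set_key / connect / verify / detach round repeats for every keyid under every digest and FFDHE group. The verification of the key4 round in progress, and then the ffdhe3072 rounds, continue in the trace below. In outline:

for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
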
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.685 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.944 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.203 nvme0n1 00:28:00.203 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.203 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.203 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.203 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.203 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.204 
20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.204 20:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.463 nvme0n1 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.463 20:22:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.463 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.722 nvme0n1 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.722 20:22:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.722 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.723 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.723 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.723 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.981 nvme0n1 00:28:00.981 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.981 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.981 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.981 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.981 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.981 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.241 20:22:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.241 20:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.500 nvme0n1 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.500 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 nvme0n1 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:02.068 20:22:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.068 20:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.636 nvme0n1 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.636 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.896 nvme0n1 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.896 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:03.155 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.156 20:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.414 nvme0n1 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.414 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.673 20:22:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.673 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.932 nvme0n1 00:28:03.932 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.932 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.932 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.932 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.933 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.933 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP
00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.191 20:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.127 nvme0n1
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.127 20:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.061 nvme0n1
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]]
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:06.061 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.062 20:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.996 nvme0n1
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:06.996 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:06.997 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.997 20:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.371 nvme0n1
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.371 20:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.937 nvme0n1
00:28:08.937 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.937 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.937 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.937 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.937 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.937 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]]
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.196 20:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.097 nvme0n1
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:11.097 20:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.474 nvme0n1
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.474 20:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.376 nvme0n1
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]]
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:14.376 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:14.377 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:14.377 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:14.377 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:14.377 20:22:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.784 nvme0n1
00:28:15.784 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:15.784 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:15.784 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:15.784 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:15.784 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.784 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:16.043 20:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.947 nvme0n1
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.947 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.948 nvme0n1
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.948 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.207 nvme0n1
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.207 20:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.466 nvme0n1
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]]
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups
ffdhe2048 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.466 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.726 nvme0n1 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.726 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.985 nvme0n1
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:18.985 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.244 nvme0n1
00:28:19.244 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.244 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.244 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:19.244 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.244 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.244 20:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]]
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.244 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.504 nvme0n1
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.504 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]]
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.766 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.025 nvme0n1
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.025 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.285 nvme0n1
00:28:20.285 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.285 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.285 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:20.285 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.285 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.285 20:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.285 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.544 nvme0n1
00:28:20.544 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.544 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.544 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:20.544 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.544 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.544 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]]
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
]] 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.803 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.062 nvme0n1 00:28:21.062 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.062 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.062 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.062 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.062 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.320 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.321 20:22:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.321 20:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.579 nvme0n1 00:28:21.579 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.579 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.579 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.579 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.579 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.839 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.098 nvme0n1 00:28:22.098 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.098 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.098 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.098 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.098 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.098 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
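
A note on the recurring [[ 0 == 0 ]] lines at common/autotest_common.sh@589: bash xtrace prints tests after expansion, so an assertion on the last exit status renders as [[ 0 == 0 ]] whenever the preceding rpc_cmd succeeded. A minimal sketch of the pattern, assuming a variable name that is not visible in this excerpt:

    rc=$?            # status of the rpc_cmd just above
    [[ $rc == 0 ]]   # xtrace shows this as [[ 0 == 0 ]] on success
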
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.358 20:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.926 nvme0n1 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.926 20:22:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.926 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.185 nvme0n1 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.185 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
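
On the host side, connect_authenticate (auth.sh@55-61) restricts the allowed DH-HMAC-CHAP parameters and then attaches with the matching keyring names. The flags below are copied from the trace; invoking them standalone via scripts/rpc.py is an assumption (the script goes through its own rpc_cmd wrapper), and it presumes keys named key4, ckey4, etc. were registered with the keyring earlier in the run, which this excerpt does not show:

    # One iteration (sha384 / ffdhe4096 / keyid=4) as plain rpc.py calls:
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
    # keyid 4 has no controller secret, so the expansion at auth.sh@58,
    # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), yields nothing;
    # for keyids with a ckey the attach also passes --dhchap-ctrlr-key ckeyN.
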
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.444 20:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.380 nvme0n1 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.380 20:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.315 nvme0n1 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.315 20:22:28 
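
The secrets themselves follow the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> encodes how the secret was transformed (00 = no transform, 01/02/03 = SHA-256/384/512, as produced by e.g. nvme gen-dhchap-key) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick length check on one of the keys from this log, illustrative only:

    k='DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:'
    b64=${k#DHHC-1:??:}; b64=${b64%:}
    echo -n "$b64" | base64 -d | wc -c   # 36 = 32-byte secret + 4-byte CRC-32
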
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.315 20:22:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.315 20:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.251 nvme0n1 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
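
The get_main_ns_ip helper traced repeatedly at nvmf/common.sh@741-755 picks the address to attach to by transport. A readable reconstruction from the expanded trace; the name of the variable holding the transport ("tcp" in this run) is not visible and TEST_TRANSPORT is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}     # holds a variable *name*
        [[ -z ${!ip} ]] && return 1              # indirect: that variable must be set
        echo "${!ip}"                            # NVMF_INITIATOR_IP=10.0.0.1 here
    }
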
key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.251 20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.251 
20:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.186 nvme0n1 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.186 20:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.122 nvme0n1 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.122 20:22:31 
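
The iteration driving this whole section is visible in the loop headers at auth.sh@100-104 (for digest / for dhgroup / for keyid). Reconstructed from the trace; only sha384 and sha512 digests and the ffdhe2048/4096/6144/8192 groups with keyids 0-4 appear in this excerpt, and any further array contents would be assumptions:

    for digest in "${digests[@]}"; do        # sha384, then sha512 below
        for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048, ffdhe4096, ffdhe6144, ffdhe8192, ...
            for keyid in "${!keys[@]}"; do   # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid" # host side
            done
        done
    done
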
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.122 20:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.026 nvme0n1 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.026 20:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.421 nvme0n1 00:28:31.421 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.421 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.421 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.421 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.421 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.421 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
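
Each successful attach is verified the same way before the next combination is tried (auth.sh@64-65 in the trace): list the controllers, check the expected name came back, then detach. The jq filter and names are taken from the trace; running them through scripts/rpc.py rather than the script's rpc_cmd wrapper is an assumption:

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]   # controller exists, i.e. DH-HMAC-CHAP auth succeeded
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
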
xtrace_disable 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.680 
20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.680 20:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.584 nvme0n1 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.584 20:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.584 20:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.961 nvme0n1 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.961 20:22:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.961 20:22:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.961 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.220 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.220 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.220 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.220 20:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.122 nvme0n1 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.122 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.123 nvme0n1 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.123 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.382 nvme0n1 00:28:37.382 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.382 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.382 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.382 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.382 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.382 20:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:37.382 
20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.382 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.642 nvme0n1 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.642 
20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.642 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.901 nvme0n1 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.901 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.160 nvme0n1 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.160 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.161 20:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.419 nvme0n1 00:28:38.419 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.419 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.419 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.419 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.419 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.419 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.677 
20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.677 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.678 20:22:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.678 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.936 nvme0n1 00:28:38.936 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.936 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:38.937 20:22:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.937 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.196 nvme0n1 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:39.196 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.196 20:22:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.454 20:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.454 nvme0n1 00:28:39.454 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.454 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.454 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.454 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.454 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.454 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:39.713 
20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.713 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
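The key4 attach just issued (with no --dhchap-ctrlr-key, since ckeys[4] is empty) closes out the ffdhe3072 group, and the same pattern repeats for every (digest, dhgroup, keyid) tuple in this trace: program the DHHC-1 secret into the kernel nvmet host entry, restrict the SPDK initiator to one digest and one DH group via bdev_nvme_set_options, attach with --dhchap-key, confirm the controller shows up, and detach. A minimal sketch of that loop, reconstructed from the xtrace (host/auth.sh@42-65 and @100-104), is below; the rpc_cmd wrapper shown here, the configfs paths behind the echoes at auth.sh@48-51, and the earlier registration of the key0..ckey3 keyring names are assumptions not visible in this excerpt.

    #!/usr/bin/env bash
    # Reconstructed sketch of the nvmf_auth_host test loop (not the actual
    # host/auth.sh). Key strings are the throwaway test secrets from the
    # trace above; everything marked "assumed" is not shown in the xtrace.
    set -e

    rpc_cmd() {  # assumed: plain rpc.py wrapper (autotest's rpc_cmd reuses a socket)
        "${rootdir:-/usr/src/spdk}/scripts/rpc.py" "$@"
    }

    digests=(sha384 sha512)                             # passes visible in this excerpt
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)  # groups visible in this excerpt
    keys=(
        "DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d:"
        "DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==:"
        "DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e:"
        "DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==:"
        "DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=:"
    )
    ckeys=(
        "DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=:"
        "DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==:"
        "DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh:"
        "DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T:"
        ""                                              # keyid 4 carries no controller key
    )
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0
    ip=10.0.0.1                                         # what get_main_ns_ip echoes above

    nvmet_auth_set_key() {  # target side, mirrors host/auth.sh@42-51
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/$hostnqn  # assumed echo target
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup"      > "$host/dhchap_dhgroup"
        echo "$key"          > "$host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }

    connect_authenticate() {  # initiator side, mirrors host/auth.sh@55-65
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only completes if DH-HMAC-CHAP succeeded, so a controller
        # named nvme0 appearing is the pass condition checked at auth.sh@64.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The ${ckeys[keyid]:+...} expansion at auth.sh@58 is why the keyid 4 attaches above carry no --dhchap-ctrlr-key: with ckeys[4] empty it produces no arguments at all, so the loop exercises the one-way-authentication path alongside the bidirectional ones.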
00:28:39.972 nvme0n1 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.972 20:22:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.972 20:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.538 nvme0n1 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.538 20:22:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.538 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.539 20:22:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.539 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.797 nvme0n1 00:28:40.797 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.797 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.797 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.797 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.797 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.797 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.055 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.056 20:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.314 nvme0n1 00:28:41.314 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.314 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.314 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.314 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.314 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.314 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.573 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.140 nvme0n1 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.140 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.141 20:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.399 nvme0n1 00:28:42.399 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.399 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.399 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.399 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.399 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.399 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.657 20:22:46 
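Every attach is preceded by get_main_ns_ip (nvmf/common.sh@741-755). The traced expansions — ip_candidates["tcp"]=NVMF_INITIATOR_IP followed by [[ -z 10.0.0.1 ]] and echo 10.0.0.1 — suggest the helper maps the transport to the name of an environment variable and dereferences it indirectly. A reconstruction on that assumption (TEST_TRANSPORT as the driving variable is also assumed; the trace only shows its expanded value, tcp):

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Traced at @747 as [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]].
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable *name*, not an address
    [[ -z ${!ip} ]] && return 1            # traced at @750 as [[ -z 10.0.0.1 ]]
    echo "${!ip}"                          # @755: 10.0.0.1 on this run
}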
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.657 20:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.592 nvme0n1 00:28:43.592 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.592 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.592 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.592 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.592 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:43.593 20:22:47 
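Worth noting in the @58 line that recurs throughout: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The :+ expansion makes bidirectional DH-HMAC-CHAP opt-in per key id: key ids 0-3 carry a controller key in this run and attach with --dhchap-ctrlr-key, while key id 4 (ckey='') authenticates the host only. A minimal reproduction, with a placeholder in place of a real secret:

ckeys=([1]="DHHC-1:02:placeholder-not-a-real-secret:" [4]="")

for keyid in 1 4; do
    # Expands to the option pair only when the controller key is non-empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-one-way (no controller authentication)}"
done
# keyid=1 -> --dhchap-ctrlr-key ckey1
# keyid=4 -> one-way (no controller authentication)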
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.593 20:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.528 nvme0n1 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.528 20:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.463 nvme0n1 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.463 20:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.399 nvme0n1 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.399 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.657 20:22:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.657 20:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.640 nvme0n1 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI1ODhmOGJiZmIwZmFjZWYxNDBjNDIzMmQ4Y2QzNjQ576+d: 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1NWJmYTNmYjk1NTQwZmU3MTM0NDBkZjkwYWU4MTM4YjYwZWY3MGI0YTExODMxOWI0YjU1MTZlOGQzODMyNdKgJWM=: 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.640 20:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.545 nvme0n1 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.545 20:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.545 20:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.919 nvme0n1 00:28:50.919 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.919 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.919 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.919 20:22:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.919 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.919 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTEyYTczNWU2ZjEwZjBmZWU3ODFkNTBjZTNhNDI4NDbcrL9e: 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: ]] 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzdiZTI5OGY0NzViZWYyM2E2NTAwMTg0M2UyNjAzNTeMK3hh: 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.178 20:22:54 
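The get_main_ns_ip helper traced repeatedly at nvmf/common.sh@741-755 resolves the address each attach targets: it maps the transport to the name of an environment variable and dereferences it, yielding 10.0.0.1 here. A reconstruction consistent with the trace (the name of the transport variable is an assumption, since xtrace only shows its expanded value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # @747: bail out if the transport or its candidate is unset.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # @748: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1             # @750: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                           # @755: echo 10.0.0.1
    }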
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.178 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.179 20:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.084 nvme0n1 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc3NDRiZmYzMDcwYzU4ZTk4ZDhjMTBlYWU3MDc2NTg3NDVjMTcwYzY4ZjEzMDI5UOPLlw==: 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA5NmI5ODdiNDk1MjhjNmNhZDIzODZiMTJhMGJiNDebMp2T: 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.084 20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.084 
20:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.988 nvme0n1 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE4OGM2OWQ3N2NjZTAyZWJmY2IyODE1ZGI1YzhlMmEwNzkyMzg2OGI1YjgxYzY4ZThiZmIzY2I3MzU5YzkwMfcu8ew=: 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
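Note the array expansion at host/auth.sh@58 in the iteration above: ${ckeys[keyid]:+...} expands to the extra argument pair only when a controller key exists for that keyid. ckeys[4] is empty (ckey= at @46), which is why the keyid=4 attach below carries no --dhchap-ctrlr-key, unlike keyids 0-3:

    # ${var:+word} yields nothing when var is empty/unset, so ckey becomes a
    # zero-element array and "${ckey[@]}" contributes no arguments.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"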
common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.988 20:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.364 nvme0n1 00:28:56.364 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.364 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.364 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.364 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.364 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.364 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEzNWVkYzg1M2NmN2Y3MjEyODMzYmZmZDlmYjIyN2ViNTNhZGYyMDFlNWE5NzhhJy1yXg==: 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUwMTE3ODY0MjI3MWI2MWQ1MjAxMmZiODBjM2Q1YmVlZjA4YmQyYzM4MzdmNjg2reCsSQ==: 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.623 request: 00:28:56.623 { 00:28:56.623 "name": "nvme0", 00:28:56.623 "trtype": "tcp", 00:28:56.623 "traddr": "10.0.0.1", 00:28:56.623 "adrfam": "ipv4", 00:28:56.623 "trsvcid": "4420", 00:28:56.623 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:56.623 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:56.623 "prchk_reftag": false, 00:28:56.623 "prchk_guard": false, 00:28:56.623 "hdgst": false, 00:28:56.623 "ddgst": false, 00:28:56.623 "method": "bdev_nvme_attach_controller", 00:28:56.623 "req_id": 1 00:28:56.623 } 00:28:56.623 Got JSON-RPC error response 00:28:56.623 response: 00:28:56.623 { 00:28:56.623 "code": -5, 00:28:56.623 "message": "Input/output error" 00:28:56.623 } 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.623 20:23:00 
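The request/response pair above is the first negative case: host/auth.sh@112 wraps the attach in NOT, so the expected JSON-RPC error -5 (Input/output error) from an unauthenticated attach is what makes the step pass, and the follow-up jq length check confirms no controller was left behind. A condensed reading of the NOT plumbing traced at autotest_common.sh@650-677 (the real helper also runs valid_exec_arg first and special-cases signal exits at @661; both are elided here):

    NOT() {
        local es=0
        "$@" || es=$?     # @653: the wrapped command sets es=1 on failure
        (( !es == 0 ))    # @677: NOT succeeds only when the command failed
    }
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0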
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.623 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.882 request: 00:28:56.882 { 00:28:56.882 "name": "nvme0", 00:28:56.882 "trtype": "tcp", 00:28:56.882 "traddr": "10.0.0.1", 00:28:56.882 "adrfam": "ipv4", 00:28:56.882 "trsvcid": "4420", 00:28:56.882 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:56.882 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:56.882 "prchk_reftag": false, 00:28:56.882 "prchk_guard": false, 00:28:56.882 "hdgst": false, 00:28:56.882 "ddgst": false, 00:28:56.882 "dhchap_key": "key2", 00:28:56.882 "method": "bdev_nvme_attach_controller", 00:28:56.882 "req_id": 1 00:28:56.882 } 00:28:56.882 Got JSON-RPC error response 00:28:56.882 response: 00:28:56.882 { 00:28:56.882 "code": -5, 00:28:56.882 "message": "Input/output error" 00:28:56.882 } 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:56.882 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.883 request: 00:28:56.883 { 00:28:56.883 "name": "nvme0", 00:28:56.883 "trtype": "tcp", 00:28:56.883 "traddr": "10.0.0.1", 00:28:56.883 "adrfam": "ipv4", 00:28:56.883 "trsvcid": "4420", 00:28:56.883 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:56.883 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:56.883 "prchk_reftag": false, 00:28:56.883 "prchk_guard": false, 00:28:56.883 "hdgst": false, 00:28:56.883 "ddgst": false, 00:28:56.883 "dhchap_key": "key1", 00:28:56.883 "dhchap_ctrlr_key": "ckey2", 00:28:56.883 "method": "bdev_nvme_attach_controller", 00:28:56.883 "req_id": 1 00:28:56.883 } 00:28:56.883 Got JSON-RPC error response 00:28:56.883 response: 00:28:56.883 { 00:28:56.883 "code": -5, 00:28:56.883 "message": "Input/output error" 00:28:56.883 } 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:56.883 rmmod nvme_tcp 00:28:56.883 rmmod nvme_fabrics 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2149515 ']' 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2149515 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2149515 ']' 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2149515 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2149515 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2149515' 00:28:56.883 killing process with pid 2149515 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2149515 00:28:56.883 20:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2149515 00:28:57.451 20:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:57.451 20:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:57.451 20:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:57.451 20:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.451 20:23:01 
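Teardown starts above: nvmftestfini syncs, unloads the host-side modules (the bare rmmod nvme_tcp / rmmod nvme_fabrics lines are modprobe -v output), and reaps the SPDK target, pid 2149515, via killprocess. Condensed from the killprocess trace at autotest_common.sh@950-974; the full helper has more fallback logic than shown:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                            # @950
        kill -0 "$pid"                                       # @954: still alive?
        if [[ $(uname) == Linux ]]; then                     # @955
            process_name=$(ps --no-headers -o comm= "$pid")  # @956 -> reactor_0
        fi
        [[ $process_name == sudo ]] && return 1              # @960: sudo needs
                                                             # special handling (elided)
        echo "killing process with pid $pid"                 # @968
        kill "$pid" && wait "$pid"                           # @969 / @974
    }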
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.451 20:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.451 20:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.452 20:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:59.355 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:59.612 20:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:01.512 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:01.512 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:01.512 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:01.512 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:01.512 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:01.512 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:01.512 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:01.512 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:01.512 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:01.512 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:01.512 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:01.512 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:01.512 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:01.512 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:01.512 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:01.512 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:02.446 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:29:02.447 20:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.h2t /tmp/spdk.key-null.nHT /tmp/spdk.key-sha256.SKH /tmp/spdk.key-sha384.Q4X /tmp/spdk.key-sha512.air /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:02.447 20:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
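clean_kernel_target, traced at nvmf/common.sh@684-695 above, dismantles the kernel nvmet configuration in reverse order of creation and then removes the modules. Everything below is taken from the traced commands except the redirection target of the echo 0 at @686, which xtrace does not capture and which is inferred here as the namespace enable attribute:

    clean_kernel_target() {
        local subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
        [[ -e $subsys ]] || return 0                       # @684
        echo 0 > "$subsys/namespaces/1/enable"             # @686 (inferred path)
        rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
        rmdir "$subsys/namespaces/1"                       # @689
        rmdir /sys/kernel/config/nvmet/ports/1             # @690
        rmdir "$subsys"                                    # @691
        modprobe -r nvmet_tcp nvmet                        # @695
    }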
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:03.821 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:03.821 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:03.821 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:03.821 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:03.821 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:03.821 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:03.821 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:03.821 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:03.821 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:03.821 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:03.821 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:03.821 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:03.821 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:03.821 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:03.821 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:03.821 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:03.821 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:03.821 00:29:03.821 real 1m14.356s 00:29:03.821 user 1m12.676s 00:29:03.821 sys 0m8.089s 00:29:03.821 20:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:03.821 20:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.821 ************************************ 00:29:03.821 END TEST nvmf_auth_host 00:29:03.821 ************************************ 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.079 ************************************ 00:29:04.079 START TEST nvmf_digest 00:29:04.079 ************************************ 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:04.079 * Looking for test storage... 
00:29:04.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.079 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:04.080 
20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:04.080 20:23:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:07.377 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:07.378 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:07.378 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.378 
20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:07.378 Found net devices under 0000:84:00.0: cvl_0_0 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:07.378 Found net devices under 0000:84:00.1: cvl_0_1 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.378 20:23:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:07.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:29:07.378 00:29:07.378 --- 10.0.0.2 ping statistics --- 00:29:07.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.378 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:29:07.378 00:29:07.378 --- 10.0.0.1 ping statistics --- 00:29:07.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.378 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.378 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.378 ************************************ 00:29:07.379 START TEST nvmf_digest_clean 00:29:07.379 ************************************ 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2161557 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2161557 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2161557 ']' 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:07.379 20:23:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.379 [2024-07-24 20:23:10.725827] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:07.379 [2024-07-24 20:23:10.725922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.379 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.379 [2024-07-24 20:23:10.832340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.379 [2024-07-24 20:23:11.031658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.379 [2024-07-24 20:23:11.031794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.379 [2024-07-24 20:23:11.031831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.379 [2024-07-24 20:23:11.031862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.379 [2024-07-24 20:23:11.031889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
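Editorial note: for readers reconstructing this test bed by hand, the nvmftestinit/nvmf_tcp_init trace above boils down to the following standalone sketch. The cvl_0_0/cvl_0_1 names are the ice netdevs this particular rig exposed, so treat them as placeholders on other hardware:

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener port
  ping -c 1 10.0.0.2                             # root ns -> namespaced target
  ip netns exec "$NS" ping -c 1 10.0.0.1         # and back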
00:29:07.379 [2024-07-24 20:23:11.031954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.335 20:23:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.335 null0 00:29:08.335 [2024-07-24 20:23:12.001921] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.335 [2024-07-24 20:23:12.026363] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2161801 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2161801 /var/tmp/bperf.sock 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2161801 ']' 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:08.335 20:23:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.335 [2024-07-24 20:23:12.117767] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:08.335 [2024-07-24 20:23:12.117854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161801 ] 00:29:08.594 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.594 [2024-07-24 20:23:12.221662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.594 [2024-07-24 20:23:12.361672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.528 20:23:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:09.528 20:23:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:09.528 20:23:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:09.528 20:23:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:09.528 20:23:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:10.093 20:23:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.093 20:23:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.352 nvme0n1 00:29:10.352 20:23:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:10.352 20:23:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.610 Running I/O for 2 seconds... 
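Editorial note: the bperf handshake driving each of these runs follows a fixed pattern visible in the trace. bdevperf is launched with --wait-for-rpc and -z so it idles until instructed over /var/tmp/bperf.sock, the framework is then started, an NVMe/TCP controller is attached with data digest (--ddgst) enabled, and perform_tests kicks off the timed workload whose results appear in the table below. A minimal sketch, with $SPDK standing in for the checkout path used on this rig:

  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the harness waits for bperf.sock to appear before issuing RPCs)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests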
00:29:12.512 00:29:12.512 Latency(us) 00:29:12.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.512 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:12.512 nvme0n1 : 2.01 14935.58 58.34 0.00 0.00 8556.41 4490.43 19806.44 00:29:12.512 =================================================================================================================== 00:29:12.512 Total : 14935.58 58.34 0.00 0.00 8556.41 4490.43 19806.44 00:29:12.512 0 00:29:12.512 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:12.512 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:12.512 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:12.512 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:12.512 | select(.opcode=="crc32c") 00:29:12.512 | "\(.module_name) \(.executed)"' 00:29:12.512 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:12.770 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:12.770 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:12.770 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:12.770 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:12.770 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2161801 00:29:12.770 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2161801 ']' 00:29:12.770 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2161801 00:29:12.770 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:13.028 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.028 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2161801 00:29:13.028 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:13.028 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:13.028 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2161801' 00:29:13.028 killing process with pid 2161801 00:29:13.028 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2161801 00:29:13.028 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.028 00:29:13.028 Latency(us) 00:29:13.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.028 =================================================================================================================== 00:29:13.028 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.028 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2161801 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2162342 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2162342 /var/tmp/bperf.sock 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2162342 ']' 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:13.287 20:23:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:13.287 [2024-07-24 20:23:16.961581] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:13.287 [2024-07-24 20:23:16.961672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162342 ] 00:29:13.287 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:13.287 Zero copy mechanism will not be used. 
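Editorial note: the acc_module/acc_executed pair each run validates comes from the jq filter visible in the trace, applied to the bperf accel_get_stats output; with DSA disabled (scan_dsa=false) the expected module is software and the executed count must be non-zero. A self-contained replay against a hypothetical payload (the real RPC response carries more fields than shown here):

  printf '%s' '{"operations":[
    {"opcode":"copy",   "module_name":"software","executed":12},
    {"opcode":"crc32c", "module_name":"software","executed":14935}]}' |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints: software 14935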
00:29:13.287 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.287 [2024-07-24 20:23:17.038980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.545 [2024-07-24 20:23:17.180769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.803 20:23:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.803 20:23:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:13.803 20:23:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:13.803 20:23:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:13.803 20:23:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:14.370 20:23:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.370 20:23:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.935 nvme0n1 00:29:14.935 20:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:14.935 20:23:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.935 Zero copy mechanism will not be used. 00:29:14.935 Running I/O for 2 seconds... 
00:29:17.465 00:29:17.465 Latency(us) 00:29:17.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.465 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:17.465 nvme0n1 : 2.00 3221.20 402.65 0.00 0.00 4961.33 1207.56 11408.12 00:29:17.465 =================================================================================================================== 00:29:17.465 Total : 3221.20 402.65 0.00 0.00 4961.33 1207.56 11408.12 00:29:17.465 0 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:17.465 | select(.opcode=="crc32c") 00:29:17.465 | "\(.module_name) \(.executed)"' 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2162342 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2162342 ']' 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2162342 00:29:17.465 20:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:17.465 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:17.465 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2162342 00:29:17.465 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:17.465 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:17.465 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2162342' 00:29:17.465 killing process with pid 2162342 00:29:17.465 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2162342 00:29:17.465 Received shutdown signal, test time was about 2.000000 seconds 00:29:17.465 00:29:17.465 Latency(us) 00:29:17.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.465 =================================================================================================================== 00:29:17.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.465 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2162342 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2162878 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2162878 /var/tmp/bperf.sock 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2162878 ']' 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.723 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.723 [2024-07-24 20:23:21.413405] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:29:17.723 [2024-07-24 20:23:21.413537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162878 ] 00:29:17.723 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.723 [2024-07-24 20:23:21.498179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.981 [2024-07-24 20:23:21.634367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.981 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.981 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:17.981 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:17.981 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:17.981 20:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:18.548 20:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.548 20:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.806 nvme0n1 00:29:18.806 20:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:18.806 20:23:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.064 Running I/O for 2 seconds... 
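Editorial note: stepping back, the four clean-digest runs in this block are one sweep. digest.sh@128-131 invoke the same helper over every combination of direction and I/O geometry, always with DSA scanning off; roughly (run_bperf here names the suite's own helper, not a new command):

  for spec in 'randread 4096 128' 'randread 131072 16' \
              'randwrite 4096 128' 'randwrite 131072 16'; do
      run_bperf $spec false    # args: rw, block size, queue depth, scan_dsa
  done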
00:29:20.964 00:29:20.964 Latency(us) 00:29:20.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.964 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.964 nvme0n1 : 2.00 16421.33 64.15 0.00 0.00 7779.59 3616.62 12379.02 00:29:20.964 =================================================================================================================== 00:29:20.964 Total : 16421.33 64.15 0.00 0.00 7779.59 3616.62 12379.02 00:29:20.964 0 00:29:20.964 20:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:20.964 20:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:20.964 20:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:20.964 20:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:20.964 20:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:20.964 | select(.opcode=="crc32c") 00:29:20.964 | "\(.module_name) \(.executed)"' 00:29:21.530 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:21.530 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2162878 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2162878 ']' 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2162878 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2162878 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2162878' 00:29:21.531 killing process with pid 2162878 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2162878 00:29:21.531 Received shutdown signal, test time was about 2.000000 seconds 00:29:21.531 00:29:21.531 Latency(us) 00:29:21.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.531 =================================================================================================================== 00:29:21.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.531 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2162878 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2163293 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2163293 /var/tmp/bperf.sock 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2163293 ']' 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:22.098 20:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:22.098 [2024-07-24 20:23:25.713803] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:22.098 [2024-07-24 20:23:25.713976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163293 ] 00:29:22.098 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.098 Zero copy mechanism will not be used. 
00:29:22.098 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.098 [2024-07-24 20:23:25.826084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.356 [2024-07-24 20:23:25.964644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.356 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:22.356 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:22.356 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:22.356 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:22.356 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:22.922 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.922 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.488 nvme0n1 00:29:23.488 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:23.488 20:23:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:23.488 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:23.488 Zero copy mechanism will not be used. 00:29:23.488 Running I/O for 2 seconds... 
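Editorial note: each run tears its bdevperf down through the killprocess helper; the xtrace above (autotest_common.sh@950-974) shows enough of it to support an approximate reconstruction, not the verbatim source:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                    # is it still alive?
      if [ "$(uname)" = Linux ]; then
          local pname
          pname=$(ps --no-headers -o comm= "$pid")
          [ "$pname" = sudo ] && return 1           # never signal the sudo wrapper itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                   # reap, as the @974 wait in the trace does
  }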
00:29:25.412 00:29:25.412 Latency(us) 00:29:25.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.412 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:25.412 nvme0n1 : 2.00 3773.26 471.66 0.00 0.00 4229.13 2912.71 6941.96 00:29:25.412 =================================================================================================================== 00:29:25.412 Total : 3773.26 471.66 0.00 0.00 4229.13 2912.71 6941.96 00:29:25.412 0 00:29:25.412 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:25.412 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:25.412 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:25.412 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:25.412 | select(.opcode=="crc32c") 00:29:25.412 | "\(.module_name) \(.executed)"' 00:29:25.412 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2163293 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2163293 ']' 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2163293 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2163293 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2163293' 00:29:25.979 killing process with pid 2163293 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2163293 00:29:25.979 Received shutdown signal, test time was about 2.000000 seconds 00:29:25.979 00:29:25.979 Latency(us) 00:29:25.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.979 =================================================================================================================== 00:29:25.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.979 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2163293 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2161557 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2161557 ']' 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2161557 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2161557 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2161557' 00:29:26.237 killing process with pid 2161557 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2161557 00:29:26.237 20:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2161557 00:29:26.806 00:29:26.806 real 0m19.732s 00:29:26.806 user 0m39.881s 00:29:26.806 sys 0m5.180s 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.806 ************************************ 00:29:26.806 END TEST nvmf_digest_clean 00:29:26.806 ************************************ 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:26.806 ************************************ 00:29:26.806 START TEST nvmf_digest_error 00:29:26.806 ************************************ 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2163867 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:26.806 20:23:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2163867 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2163867 ']' 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.806 20:23:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:26.806 [2024-07-24 20:23:30.585898] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:26.806 [2024-07-24 20:23:30.586068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.065 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.065 [2024-07-24 20:23:30.735689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.323 [2024-07-24 20:23:30.937846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.323 [2024-07-24 20:23:30.937953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.323 [2024-07-24 20:23:30.937988] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.323 [2024-07-24 20:23:30.938019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.323 [2024-07-24 20:23:30.938045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:27.323 [2024-07-24 20:23:30.938107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.323 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:27.323 [2024-07-24 20:23:31.107215] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:27.580 null0 00:29:27.580 [2024-07-24 20:23:31.281208] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.580 [2024-07-24 20:23:31.305563] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2164002 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2164002 /var/tmp/bperf.sock 00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2164002 ']' 
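Editorial note: the error-path variant starting here differs from the clean runs in three RPCs, all visible in the trace. The target (itself started with --wait-for-rpc) routes crc32c through the accel error module before its framework comes up, bperf disables bdev-level retries so digest failures surface immediately, and the injector is armed to corrupt crc32c results just before perform_tests; the COMMAND TRANSIENT TRANSPORT ERROR completions further down are the result. Distilled, with $SPDK as in the earlier sketch:

  RPC=$SPDK/scripts/rpc.py
  $RPC accel_assign_opc -o crc32c -m error                 # target side, before framework_start_init
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1              # bperf side
  $RPC accel_error_inject_error -o crc32c -t disable       # start clean...
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256  # ...then inject corruption (-i 256 as in the trace)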
00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2164002 /var/tmp/bperf.sock
00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2164002 ']'
00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:27.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:27.580 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:27.838 [2024-07-24 20:23:31.384084] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:29:27.838 [2024-07-24 20:23:31.384170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164002 ]
00:29:27.838 EAL: No free 2048 kB hugepages reported on node 1
00:29:27.838 [2024-07-24 20:23:31.464639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:27.838 [2024-07-24 20:23:31.603054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:28.095 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:28.095 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:28.095 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:28.095 20:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:28.352 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:28.352 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:28.352 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:28.352 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:28.352 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:28.352 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:29.284 nvme0n1
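The wiring here is the crux of the test, so it is worth restating outside the xtrace noise. Note the two RPC sockets: rpc_cmd talks to the nvmf target's default socket, while bperf_rpc talks to bdevperf's /var/tmp/bperf.sock. Every command below appears verbatim in the trace (the corrupt-mode injection is the step logged immediately after this point):

    BPERF="scripts/rpc.py -s /var/tmp/bperf.sock"

    # bdevperf: keep per-status-code NVMe error counters (--nvme-error-stat) and
    # retry failed I/O forever, so injected errors are counted instead of fatal.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target: injection off while the controller attaches cleanly.
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # bdevperf: attach with data digest enabled (--ddgst), so every NVMe/TCP data
    # PDU carries a crc32c digest -- the opcode that was routed to the error module.
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target: start corrupting crc32c results, so the digests it transmits stop
    # matching what the host computes on receive.
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

This split explains the error signature that follows: the corruption is injected on the target, but it is the host's receive path (nvme_tcp.c) that recomputes the digest, detects the mismatch, and fails the read.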
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.284 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.284 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:29.284 20:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.284 Running I/O for 2 seconds... 00:29:29.284 [2024-07-24 20:23:33.013870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.284 [2024-07-24 20:23:33.013934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.285 [2024-07-24 20:23:33.013961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.285 [2024-07-24 20:23:33.031554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.285 [2024-07-24 20:23:33.031601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.285 [2024-07-24 20:23:33.031625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.285 [2024-07-24 20:23:33.045795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.285 [2024-07-24 20:23:33.045840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.285 [2024-07-24 20:23:33.045864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.285 [2024-07-24 20:23:33.063256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.285 [2024-07-24 20:23:33.063299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.285 [2024-07-24 20:23:33.063322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.081377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.543 [2024-07-24 20:23:33.081421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.543 [2024-07-24 20:23:33.081454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.098025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.543 [2024-07-24 20:23:33.098078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.543 [2024-07-24 20:23:33.098100] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.115524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.543 [2024-07-24 20:23:33.115566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.543 [2024-07-24 20:23:33.115591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.130447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.543 [2024-07-24 20:23:33.130489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.543 [2024-07-24 20:23:33.130513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.147384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.543 [2024-07-24 20:23:33.147447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.543 [2024-07-24 20:23:33.147474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.164259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.543 [2024-07-24 20:23:33.164301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.543 [2024-07-24 20:23:33.164324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.182325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.543 [2024-07-24 20:23:33.182367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.543 [2024-07-24 20:23:33.182391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.197923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.543 [2024-07-24 20:23:33.197966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.543 [2024-07-24 20:23:33.197991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.543 [2024-07-24 20:23:33.218522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.544 [2024-07-24 20:23:33.218563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:457 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:29.544 [2024-07-24 20:23:33.218586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.544 [2024-07-24 20:23:33.233518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.544 [2024-07-24 20:23:33.233560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-07-24 20:23:33.233583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.544 [2024-07-24 20:23:33.254539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.544 [2024-07-24 20:23:33.254581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-07-24 20:23:33.254605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.544 [2024-07-24 20:23:33.271323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.544 [2024-07-24 20:23:33.271373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-07-24 20:23:33.271396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.544 [2024-07-24 20:23:33.286312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.544 [2024-07-24 20:23:33.286355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-07-24 20:23:33.286379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.544 [2024-07-24 20:23:33.304024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.544 [2024-07-24 20:23:33.304067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-07-24 20:23:33.304091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.544 [2024-07-24 20:23:33.322657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.544 [2024-07-24 20:23:33.322699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.544 [2024-07-24 20:23:33.322723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.801 [2024-07-24 20:23:33.342619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.801 [2024-07-24 20:23:33.342663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.801 [2024-07-24 20:23:33.342689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.801 [2024-07-24 20:23:33.356622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.801 [2024-07-24 20:23:33.356663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.801 [2024-07-24 20:23:33.356686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.801 [2024-07-24 20:23:33.375290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.801 [2024-07-24 20:23:33.375332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.801 [2024-07-24 20:23:33.375356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.801 [2024-07-24 20:23:33.391123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.801 [2024-07-24 20:23:33.391165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.801 [2024-07-24 20:23:33.391189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.801 [2024-07-24 20:23:33.406851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.801 [2024-07-24 20:23:33.406893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.801 [2024-07-24 20:23:33.406916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.801 [2024-07-24 20:23:33.424930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.801 [2024-07-24 20:23:33.424971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.801 [2024-07-24 20:23:33.424994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.801 [2024-07-24 20:23:33.441579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.801 [2024-07-24 20:23:33.441621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.801 [2024-07-24 20:23:33.441653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.802 [2024-07-24 20:23:33.457835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.802 [2024-07-24 20:23:33.457877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.802 [2024-07-24 20:23:33.457902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.802 [2024-07-24 20:23:33.474591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.802 [2024-07-24 20:23:33.474634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.802 [2024-07-24 20:23:33.474657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.802 [2024-07-24 20:23:33.491835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.802 [2024-07-24 20:23:33.491877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.802 [2024-07-24 20:23:33.491901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.802 [2024-07-24 20:23:33.507065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.802 [2024-07-24 20:23:33.507110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.802 [2024-07-24 20:23:33.507133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.802 [2024-07-24 20:23:33.525754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.802 [2024-07-24 20:23:33.525797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.802 [2024-07-24 20:23:33.525822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.802 [2024-07-24 20:23:33.543066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.802 [2024-07-24 20:23:33.543108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.802 [2024-07-24 20:23:33.543132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.802 [2024-07-24 20:23:33.560321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:29.802 [2024-07-24 20:23:33.560363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.802 [2024-07-24 20:23:33.560386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.802 [2024-07-24 20:23:33.575494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 
00:29:29.802 [2024-07-24 20:23:33.575536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.802 [2024-07-24 20:23:33.575559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.592244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.592296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.592321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.610381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.610423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.610457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.625848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.625889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.625912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.642196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.642238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.642261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.659391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.659441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.659466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.676898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.676940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.676963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.692056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.692098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.692122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.712448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.712490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.712514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.730156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.730198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.730221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.746638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.746680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.746703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.764716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.764757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.764780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.782263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.782305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.782329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.799254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.799297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.799321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.814246] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.814288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.814311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.060 [2024-07-24 20:23:33.834221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.060 [2024-07-24 20:23:33.834264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.060 [2024-07-24 20:23:33.834288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:33.850848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:33.850891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:33.850915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:33.870647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:33.870688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:33.870712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:33.892488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:33.892531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:33.892562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:33.912199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:33.912241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:33.912264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:33.928316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:33.928358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:33.928381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:33.950217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:33.950260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:33.950283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:33.971804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:33.971846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:33.971869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:33.988050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:33.988093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:33.988117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:34.005288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:34.005331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:34.005354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:34.021615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:34.021658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:34.021682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:34.038583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:34.038623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:34.038645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:34.055918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:34.055963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:34.055987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:34.072199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:34.072243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:34.072267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.319 [2024-07-24 20:23:34.087745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.319 [2024-07-24 20:23:34.087788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.319 [2024-07-24 20:23:34.087812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.577 [2024-07-24 20:23:34.106890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.577 [2024-07-24 20:23:34.106934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-07-24 20:23:34.106973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.577 [2024-07-24 20:23:34.122620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.577 [2024-07-24 20:23:34.122664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-07-24 20:23:34.122688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.577 [2024-07-24 20:23:34.142180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.577 [2024-07-24 20:23:34.142222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-07-24 20:23:34.142245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.577 [2024-07-24 20:23:34.161500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.577 [2024-07-24 20:23:34.161543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-07-24 20:23:34.161566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.577 [2024-07-24 20:23:34.177529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.577 [2024-07-24 20:23:34.177576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-07-24 20:23:34.177599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.577 [2024-07-24 20:23:34.196085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.577 [2024-07-24 20:23:34.196128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.577 [2024-07-24 20:23:34.196166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.577 [2024-07-24 20:23:34.213377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.213420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.578 [2024-07-24 20:23:34.213454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.578 [2024-07-24 20:23:34.228304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.228346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.578 [2024-07-24 20:23:34.228370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.578 [2024-07-24 20:23:34.243877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.243918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.578 [2024-07-24 20:23:34.243940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.578 [2024-07-24 20:23:34.263292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.263334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.578 [2024-07-24 20:23:34.263358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.578 [2024-07-24 20:23:34.279633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.279674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.578 [2024-07-24 20:23:34.279698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.578 [2024-07-24 20:23:34.300584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.300626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:30.578 [2024-07-24 20:23:34.300650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.578 [2024-07-24 20:23:34.319964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.320006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.578 [2024-07-24 20:23:34.320030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.578 [2024-07-24 20:23:34.335460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.335504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.578 [2024-07-24 20:23:34.335528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.578 [2024-07-24 20:23:34.355363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.578 [2024-07-24 20:23:34.355413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.578 [2024-07-24 20:23:34.355447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.372737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.372779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.372801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.387873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.387915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.387938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.409272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.409313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.409336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.429236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.429278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:7410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.429303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.445349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.445392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.445416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.465368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.465412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.465445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.479713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.479762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.479786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.500322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.500365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.500388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.519597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.519639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.519662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.534577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.534618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.534641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.553365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.553407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.553438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.569240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.569287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.569311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.585541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.585583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.585606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.836 [2024-07-24 20:23:34.603001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:30.836 [2024-07-24 20:23:34.603042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.836 [2024-07-24 20:23:34.603065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.094 [2024-07-24 20:23:34.623702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.094 [2024-07-24 20:23:34.623749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.094 [2024-07-24 20:23:34.623772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.094 [2024-07-24 20:23:34.638728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.094 [2024-07-24 20:23:34.638771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.094 [2024-07-24 20:23:34.638794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.094 [2024-07-24 20:23:34.658709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.094 [2024-07-24 20:23:34.658750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.094 [2024-07-24 20:23:34.658782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.094 [2024-07-24 20:23:34.677146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 
00:29:31.094 [2024-07-24 20:23:34.677189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.094 [2024-07-24 20:23:34.677212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.094 [2024-07-24 20:23:34.692346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.094 [2024-07-24 20:23:34.692388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.094 [2024-07-24 20:23:34.692412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.712487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.712529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.712553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.734232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.734274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.734298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.753935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.753977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.754000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.768691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.768733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.768756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.784852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.784894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.784917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.803157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.803200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.803225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.820627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.820676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.820701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.835479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.835523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.835546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.854482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.854524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.854547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.095 [2024-07-24 20:23:34.874687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.095 [2024-07-24 20:23:34.874729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.095 [2024-07-24 20:23:34.874752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.353 [2024-07-24 20:23:34.889988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.353 [2024-07-24 20:23:34.890030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.353 [2024-07-24 20:23:34.890053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.353 [2024-07-24 20:23:34.909817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530) 00:29:31.353 [2024-07-24 20:23:34.909860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.353 [2024-07-24 20:23:34.909883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.353 [2024-07-24 20:23:34.924896] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530)
00:29:31.353 [2024-07-24 20:23:34.924937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.353 [2024-07-24 20:23:34.924960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:31.353 [2024-07-24 20:23:34.943882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530)
00:29:31.353 [2024-07-24 20:23:34.943924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.353 [2024-07-24 20:23:34.943946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:31.353 [2024-07-24 20:23:34.958571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530)
00:29:31.353 [2024-07-24 20:23:34.958611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.353 [2024-07-24 20:23:34.958641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:31.353 [2024-07-24 20:23:34.976346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530)
00:29:31.353 [2024-07-24 20:23:34.976388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.353 [2024-07-24 20:23:34.976411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:31.353 [2024-07-24 20:23:34.997399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22bc530)
00:29:31.353 [2024-07-24 20:23:34.997449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.353 [2024-07-24 20:23:34.997473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:31.353
00:29:31.353 Latency(us)
00:29:31.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:31.353 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:31.353 nvme0n1 : 2.01 14498.40 56.63 0.00 0.00 8814.95 4587.52 29127.11
00:29:31.353 ===================================================================================================================
00:29:31.353 Total : 14498.40 56.63 0.00 0.00 8814.95 4587.52 29127.11
00:29:31.353 0
00:29:31.353 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:31.353 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:31.353 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:31.353 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:31.353 | .driver_specific
00:29:31.353 | .nvme_error
00:29:31.353 | .status_code
00:29:31.353 | .command_transient_transport_error'
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 114 > 0 ))
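The pass/fail check above is worth spelling out: because the controller was attached with --nvme-error-stat, every injected digest failure is tallied per NVMe status code, and get_transient_errcount simply reads the command_transient_transport_error counter back out of bdev_get_iostat (114 here; the table's 56.63 MiB/s is also consistent with 14498.40 IOPS x 4096 B / 2^20). A minimal standalone sketch of the same extraction, assuming this workspace's paths and the bperf socket used in the trace:

  # Sketch only: mirrors what digest.sh's get_transient_errcount does for this run.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The sub-test passes when at least one injected digest error surfaced end to end.
  (( errcount > 0 ))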
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2164002
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2164002 ']'
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2164002
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2164002
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2164002'
00:29:31.918 killing process with pid 2164002
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2164002
00:29:31.918 Received shutdown signal, test time was about 2.000000 seconds
00:29:31.918
00:29:31.918 Latency(us)
00:29:31.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:31.918 ===================================================================================================================
00:29:31.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:31.918 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2164002
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2164535
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2164535 /var/tmp/bperf.sock
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2164535 ']'
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:32.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:32.177 20:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:32.177 [2024-07-24 20:23:35.855492] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:29:32.177 [2024-07-24 20:23:35.855595] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164535 ]
00:29:32.177 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:32.177 Zero copy mechanism will not be used.
00:29:32.177 EAL: No free 2048 kB hugepages reported on node 1
00:29:32.177 [2024-07-24 20:23:35.938915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:32.177 [2024-07-24 20:23:36.079623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:32.435 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:32.435 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
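What just happened is the bperf scaffolding digest.sh reuses for every sub-test: bdevperf is launched idle (-z) against a private RPC socket, and waitforlisten polls until that UNIX-domain socket answers before any configuration RPC is sent. A rough equivalent of the launch-and-wait step, with the polling loop simplified from the harness's waitforlisten (the rpc_get_methods probe is an assumption standing in for its readiness check):

  # Sketch only: start bdevperf idle on bperf.sock and wait for its RPC server.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bperf.sock
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Any harmless RPC serves as a liveness probe once the app starts listening.
  until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done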
00:29:32.435 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:32.435 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:33.000 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:33.000 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:33.000 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:33.000 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:33.000 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:33.000 20:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:33.566 nvme0n1
00:29:33.824 20:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:33.824 20:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:33.824 20:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:33.824 20:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:33.824 20:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:33.824 20:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:33.824 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:33.824 Zero copy mechanism will not be used.
00:29:33.824 Running I/O for 2 seconds...
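The ordering of the RPCs above is the heart of the test: CRC-32C error injection is first disabled so bdev_nvme_attach_controller can bring up nvme0n1 cleanly over a data-digest (--ddgst) connection, corruption is then re-armed (-t corrupt -i 32, exactly as traced) so the receive-path CRC-32C step that validates each data PDU starts reporting digest errors, and only then does bperf_py kick off the timed workload. Condensed into the underlying calls, reusing $spdk and $sock from the sketch above:

  # Sketch only: the configure-then-run sequence just traced, over bperf.sock.
  rpc() { "$spdk/scripts/rpc.py" -s "$sock" "$@"; }
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc accel_error_inject_error -o crc32c -t disable    # attach must succeed cleanly
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests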
00:29:33.824 [2024-07-24 20:23:37.522374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.522452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.824 [2024-07-24 20:23:37.522483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.824 [2024-07-24 20:23:37.532378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.532424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.824 [2024-07-24 20:23:37.532460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.824 [2024-07-24 20:23:37.543011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.543060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.824 [2024-07-24 20:23:37.543084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.824 [2024-07-24 20:23:37.552878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.552924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.824 [2024-07-24 20:23:37.552948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.824 [2024-07-24 20:23:37.563066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.563109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.824 [2024-07-24 20:23:37.563133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.824 [2024-07-24 20:23:37.572987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.573030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.824 [2024-07-24 20:23:37.573053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.824 [2024-07-24 20:23:37.582687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.582742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.824 [2024-07-24 20:23:37.582767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.824 [2024-07-24 20:23:37.592011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.592054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.824 [2024-07-24 20:23:37.592077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.824 [2024-07-24 20:23:37.602093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:33.824 [2024-07-24 20:23:37.602136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.824 [2024-07-24 20:23:37.602160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.612446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.612503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.612525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.622821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.622864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.622887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.633312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.633355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.633378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.643421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.643472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.643495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.653102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.653144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.653167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.662648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.662689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.662712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.672455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.672496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.672520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.682156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.682197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.682220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.691864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.691904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.691927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.701894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.701936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.701959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.711972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.712015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.712038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.722081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.722124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.722147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.732906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.732949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.732973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.742763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.742805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.742828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.752713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.752757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.752788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.762778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.762821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.762844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.773451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.773494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.773517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.784475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 
00:29:34.083 [2024-07-24 20:23:37.784520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.784544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.794768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.794812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.794836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.804894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.804935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.804958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.815980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.816024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.816047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.083 [2024-07-24 20:23:37.825930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.083 [2024-07-24 20:23:37.825973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.083 [2024-07-24 20:23:37.825996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.084 [2024-07-24 20:23:37.835738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.084 [2024-07-24 20:23:37.835780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.084 [2024-07-24 20:23:37.835802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.084 [2024-07-24 20:23:37.846505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.084 [2024-07-24 20:23:37.846549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.084 [2024-07-24 20:23:37.846572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.084 [2024-07-24 20:23:37.856787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.084 [2024-07-24 20:23:37.856830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.084 [2024-07-24 20:23:37.856853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.084 [2024-07-24 20:23:37.867026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.084 [2024-07-24 20:23:37.867083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.084 [2024-07-24 20:23:37.867106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.878510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.878554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.878579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.888195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.888239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.888263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.899277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.899320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.899343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.910489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.910532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.910556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.921540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.921584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.921607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.932439] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.932481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.932514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.944374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.944419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.944452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.956106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.956150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.956173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.966862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.966905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.966929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.978054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.978097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.978121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.985141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.985187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.985210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:37.994375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:37.994419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:37.994457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
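Every failure in this run prints as the same three-line pattern: nvme_tcp.c reports the digest mismatch on the qpair, nvme_qpair.c prints the READ that carried the bad payload, and spdk_nvme_print_completion prints the matching completion. Decoding one completion from the entries above (field meanings per the NVMe completion-queue-entry layout):

  COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
  (00/22)     -> status as SCT/SC: status code type 0x0 (generic), status code 0x22,
                 which is exactly the transient transport error named beside it
  qid:1 cid:3 -> I/O queue 1, command identifier 3 (ties back to the READ line above it)
  cdw0:0      -> completion dword 0
  sqhd:0061   -> current submission queue head pointer
  p/m/dnr     -> phase tag, more bit, do-not-retry bit

With dnr:0 the error is retriable, and since the controller was attached with --bdev-retry-count -1 the bdev layer keeps retrying until the read succeeds; that is why the earlier summary showed 0.00 Fail/s even though the transient-error counter recorded every injected digest failure.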
00:29:34.343 [2024-07-24 20:23:38.006734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.006778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.006802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.017811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.017855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.017878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.029113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.029165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.029190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.040348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.040392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.040416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.051743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.051787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.051810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.063299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.063341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.063365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.074775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.074818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.074842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.084827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.084871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.084894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.095252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.095294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.095317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.105351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.105393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.105416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.116071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.116113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.116136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.343 [2024-07-24 20:23:38.126862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.343 [2024-07-24 20:23:38.126903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.343 [2024-07-24 20:23:38.126925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.138070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.138112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.138135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.149147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.149191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.149214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.160047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.160092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.160115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.171041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.171085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.171108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.182321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.182364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.182387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.193301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.193344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.193367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.204736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.204779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.204804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.215669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.215713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.215744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.226552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.226594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.226617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.602 [2024-07-24 20:23:38.237402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.602 [2024-07-24 20:23:38.237455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.602 [2024-07-24 20:23:38.237480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.248666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.248710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.248733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.259596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.259638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.259661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.270625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.270668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.270691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.282078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.282121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.282145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.292922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.292965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.292988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.302572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.302615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 
[2024-07-24 20:23:38.302638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.313020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.313071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.313095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.322554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.322597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.322620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.331918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.331960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.331982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.340794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.340836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.340859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.350261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.350302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.350324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.359930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.359971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.359993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.369576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.369616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.369639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.379515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.379575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.379598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.603 [2024-07-24 20:23:38.385774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.603 [2024-07-24 20:23:38.385815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.603 [2024-07-24 20:23:38.385837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.861 [2024-07-24 20:23:38.393932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.861 [2024-07-24 20:23:38.393973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.861 [2024-07-24 20:23:38.393998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.861 [2024-07-24 20:23:38.403896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.861 [2024-07-24 20:23:38.403936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.861 [2024-07-24 20:23:38.403959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.861 [2024-07-24 20:23:38.413523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.861 [2024-07-24 20:23:38.413563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.413586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.424019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.424061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.424084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.433940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.433981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.434004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.443992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.444034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.444057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.453896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.453938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.453960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.464656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.464699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.464722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.474537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.474577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.474608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.484478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.484518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.484540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.494371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.494414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.494449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.504101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.504141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.504165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.514129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.514169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.514191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.524475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.524516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.524540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.534785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.534827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.534857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.545107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.545149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.545172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.555773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.555816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.555840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.565903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.565952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.565977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.575680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 
[2024-07-24 20:23:38.575721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.575744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.585300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.585341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.585364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.595283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.595324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.595347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.605674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.605716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.605739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.615965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.616007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.616030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.625948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.625989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.626012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.636032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.636075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.636097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.862 [2024-07-24 20:23:38.646100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb32e30) 00:29:34.862 [2024-07-24 20:23:38.646141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.862 [2024-07-24 20:23:38.646162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.656396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.656449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.656474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.666273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.666316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.666339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.675574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.675614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.675635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.685408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.685472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.685496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.694931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.694973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.694996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.704498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.704539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.704561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.713832] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.713872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.713895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.723783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.723827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.723850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.733334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.733376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.733408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.742782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.742823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.742846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.753146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.753200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.753223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.763490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.763531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.763555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.772996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.773037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.773059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
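Every failure in this run is the same two-step pattern: nvme_tcp.c reports a CRC32C data digest mismatch on the receive path of qpair 0xb32e30, and the READ it belongs to then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), the status this digest test deliberately provokes and later reads back from the controller's nvme_error statistics. A quick way to tally the failures from a saved copy of this console output (a sketch; console.log is a hypothetical capture, not a file the harness writes):

  # Count every transient-transport-error completion, regardless of line wrapping.
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l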
00:29:35.121 [2024-07-24 20:23:38.782290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.782330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.782352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.791865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.791906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.791928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.801365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.801406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.801439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.121 [2024-07-24 20:23:38.810132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.121 [2024-07-24 20:23:38.810173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.121 [2024-07-24 20:23:38.810195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.122 [2024-07-24 20:23:38.819776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.122 [2024-07-24 20:23:38.819817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.122 [2024-07-24 20:23:38.819839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.122 [2024-07-24 20:23:38.829512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.122 [2024-07-24 20:23:38.829553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.122 [2024-07-24 20:23:38.829575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.122 [2024-07-24 20:23:38.839280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.122 [2024-07-24 20:23:38.839321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.122 [2024-07-24 20:23:38.839343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.122 [2024-07-24 20:23:38.850741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.122 [2024-07-24 20:23:38.850782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.122 [2024-07-24 20:23:38.850805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.122 [2024-07-24 20:23:38.866008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.122 [2024-07-24 20:23:38.866051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.122 [2024-07-24 20:23:38.866074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.122 [2024-07-24 20:23:38.881100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.122 [2024-07-24 20:23:38.881142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.122 [2024-07-24 20:23:38.881164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.122 [2024-07-24 20:23:38.895927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.122 [2024-07-24 20:23:38.895968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.122 [2024-07-24 20:23:38.895991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:38.911375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.380 [2024-07-24 20:23:38.911417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.380 [2024-07-24 20:23:38.911450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:38.926411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.380 [2024-07-24 20:23:38.926461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.380 [2024-07-24 20:23:38.926492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:38.941153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.380 [2024-07-24 20:23:38.941195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.380 [2024-07-24 20:23:38.941218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:38.956181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.380 [2024-07-24 20:23:38.956222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.380 [2024-07-24 20:23:38.956245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:38.971328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.380 [2024-07-24 20:23:38.971370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.380 [2024-07-24 20:23:38.971392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:38.986537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.380 [2024-07-24 20:23:38.986579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.380 [2024-07-24 20:23:38.986601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:39.001597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.380 [2024-07-24 20:23:39.001638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.380 [2024-07-24 20:23:39.001660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:39.016010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.380 [2024-07-24 20:23:39.016051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.380 [2024-07-24 20:23:39.016073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.380 [2024-07-24 20:23:39.031251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.031292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 [2024-07-24 20:23:39.031315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.381 [2024-07-24 20:23:39.046235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.046276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 [2024-07-24 20:23:39.046298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.381 [2024-07-24 20:23:39.061383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.061440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 [2024-07-24 20:23:39.061468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.381 [2024-07-24 20:23:39.077008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.077051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 [2024-07-24 20:23:39.077075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.381 [2024-07-24 20:23:39.092925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.092968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 [2024-07-24 20:23:39.092991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.381 [2024-07-24 20:23:39.107733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.107788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 [2024-07-24 20:23:39.107812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.381 [2024-07-24 20:23:39.122646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.122688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 [2024-07-24 20:23:39.122712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.381 [2024-07-24 20:23:39.137862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.137903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 [2024-07-24 20:23:39.137926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.381 [2024-07-24 20:23:39.152883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.381 [2024-07-24 20:23:39.152925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.381 
[2024-07-24 20:23:39.152948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.639 [2024-07-24 20:23:39.167570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.639 [2024-07-24 20:23:39.167611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.639 [2024-07-24 20:23:39.167633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.639 [2024-07-24 20:23:39.182828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.639 [2024-07-24 20:23:39.182869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.639 [2024-07-24 20:23:39.182893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.639 [2024-07-24 20:23:39.197736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.639 [2024-07-24 20:23:39.197778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.639 [2024-07-24 20:23:39.197801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.639 [2024-07-24 20:23:39.212930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.639 [2024-07-24 20:23:39.212971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.639 [2024-07-24 20:23:39.212994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.639 [2024-07-24 20:23:39.228390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.639 [2024-07-24 20:23:39.228442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.639 [2024-07-24 20:23:39.228469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.639 [2024-07-24 20:23:39.243160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.243200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.243223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.258094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.258135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.258158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.273083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.273124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.273147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.288071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.288111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.288134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.302874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.302915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.302937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.317755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.317797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.317831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.332792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.332833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.332856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.347740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.347781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.347804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.362655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.362698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.362721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.378563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.378606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.378630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.394239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.394282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.394306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.640 [2024-07-24 20:23:39.409780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.640 [2024-07-24 20:23:39.409822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.640 [2024-07-24 20:23:39.409844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.898 [2024-07-24 20:23:39.425735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.898 [2024-07-24 20:23:39.425776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-07-24 20:23:39.425799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.898 [2024-07-24 20:23:39.441316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.898 [2024-07-24 20:23:39.441357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-07-24 20:23:39.441380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.898 [2024-07-24 20:23:39.456767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.898 [2024-07-24 20:23:39.456818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-07-24 20:23:39.456842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.898 [2024-07-24 20:23:39.472489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30) 00:29:35.898 [2024-07-24 20:23:39.472530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.898 [2024-07-24 20:23:39.472553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:35.898 [2024-07-24 20:23:39.488289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30)
00:29:35.898 [2024-07-24 20:23:39.488330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.898 [2024-07-24 20:23:39.488353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:35.898 [2024-07-24 20:23:39.502348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30)
00:29:35.898 [2024-07-24 20:23:39.502390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.898 [2024-07-24 20:23:39.502414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:35.898 [2024-07-24 20:23:39.516475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb32e30)
00:29:35.898 [2024-07-24 20:23:39.516516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.898 [2024-07-24 20:23:39.516539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:35.898
00:29:35.898 Latency(us)
00:29:35.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:35.898 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:35.898 nvme0n1 : 2.01 2707.29 338.41 0.00 0.00 5902.69 1213.63 16311.18
00:29:35.898 ===================================================================================================================
00:29:35.898 Total : 2707.29 338.41 0.00 0.00 5902.69 1213.63 16311.18
00:29:35.898 0
00:29:35.898 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:35.898 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:35.898 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:35.898 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:35.898 | .driver_specific
00:29:35.898 | .nvme_error
00:29:35.898 | .status_code
00:29:35.898 | .command_transient_transport_error'
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 175 > 0 ))
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2164535
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2164535 ']'
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2164535
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2164535
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2164535'
00:29:36.156 killing process with pid 2164535
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2164535
00:29:36.156 Received shutdown signal, test time was about 2.000000 seconds
00:29:36.156
00:29:36.156 Latency(us)
00:29:36.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:36.156 ===================================================================================================================
00:29:36.156 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:36.156 20:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2164535
00:29:36.723 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:36.723 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:36.723 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:36.723 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:36.723 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:36.723 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2165065
00:29:36.723 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:36.724 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2165065 /var/tmp/bperf.sock
00:29:36.724 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2165065 ']'
00:29:36.724 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:36.724 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:36.724 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
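The trace above tears down the randread bdevperf instance and launches a fresh one for the randwrite 4096/128 pass, started idle with -z until waitforlisten sees its RPC socket answer. A minimal stand-alone sketch of that launch-and-wait pattern, assuming this job's workspace layout; the polling loop is a simplified stand-in for the waitforlisten helper in autotest_common.sh:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -z keeps bdevperf idle so the workload only runs once perform_tests arrives over RPC.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Poll the UNIX-domain RPC socket until the app answers (the harness allows 100 retries).
  until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done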
00:29:36.724 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:36.724 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.724 [2024-07-24 20:23:40.269145] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:29:36.724 [2024-07-24 20:23:40.269255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165065 ]
00:29:36.724 EAL: No free 2048 kB hugepages reported on node 1
00:29:36.724 [2024-07-24 20:23:40.351939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:36.724 [2024-07-24 20:23:40.490836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:36.982 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:36.982 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:36.982 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:36.982 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:37.241 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:37.241 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:37.241 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:37.241 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:37.241 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:37.241 20:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:37.807 nvme0n1
00:29:37.807 20:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:37.807 20:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:37.807 20:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.065 20:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:38.065 20:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:38.065 20:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:38.065 Running I/O for 2 seconds...
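Taken together, the RPCs traced above set up the error case end to end: the new bdevperf keeps NVMe error statistics and retries failed I/O indefinitely, any stale crc32c injection is disabled, the controller is attached with data digest enabled (--ddgst), corruption is then injected into 256 crc32c operations, and bdevperf.py starts the 2-second randwrite run whose digest failures follow. A sketch of that sequence in log order, under the same paths; note that accel_error_inject_error is issued through rpc_cmd, i.e. against the target application's default RPC socket rather than /var/tmp/bperf.sock, so it is the target's crc32c work that is corrupted:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Host side: keep per-controller NVMe error counters and never give up on retries.
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target side: clear any crc32c error injection left over from the previous pass.
  "$RPC" accel_error_inject_error -o crc32c -t disable
  # Host side: attach with data digests on, so every data PDU payload is CRC32C-checked.
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target side: corrupt the next 256 crc32c operations so computed digests disagree.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick the idle bdevperf through its configured 2-second randwrite workload.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests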
00:29:38.065 [2024-07-24 20:23:41.757538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ed920 00:29:38.065 [2024-07-24 20:23:41.759032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.065 [2024-07-24 20:23:41.759085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:38.065 [2024-07-24 20:23:41.773867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190eea00 00:29:38.065 [2024-07-24 20:23:41.775273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.065 [2024-07-24 20:23:41.775314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:38.065 [2024-07-24 20:23:41.790386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fef90 00:29:38.065 [2024-07-24 20:23:41.791995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.065 [2024-07-24 20:23:41.792034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:38.065 [2024-07-24 20:23:41.806571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fd208 00:29:38.065 [2024-07-24 20:23:41.808177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.065 [2024-07-24 20:23:41.808227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:38.065 [2024-07-24 20:23:41.821487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e4140 00:29:38.065 [2024-07-24 20:23:41.823089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.065 [2024-07-24 20:23:41.823127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:38.065 [2024-07-24 20:23:41.839169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190efae0 00:29:38.065 [2024-07-24 20:23:41.841000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.065 [2024-07-24 20:23:41.841039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.855685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fc128 00:29:38.325 [2024-07-24 20:23:41.857623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.857663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.870721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190df988 00:29:38.325 [2024-07-24 20:23:41.872741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.872779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.887699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ddc00 00:29:38.325 [2024-07-24 20:23:41.889906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.889945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.904303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f4f40 00:29:38.325 [2024-07-24 20:23:41.906737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.906776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.920856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190eea00 00:29:38.325 [2024-07-24 20:23:41.923495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.923534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.932068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e99d8 00:29:38.325 [2024-07-24 20:23:41.933241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.933279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.950187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e8088 00:29:38.325 [2024-07-24 20:23:41.952153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.952192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.966824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ec840 00:29:38.325 [2024-07-24 20:23:41.969015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.983406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e7818 00:29:38.325 [2024-07-24 20:23:41.985814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:41.985852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:41.999964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f9f68 00:29:38.325 [2024-07-24 20:23:42.002588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.325 [2024-07-24 20:23:42.002627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:38.325 [2024-07-24 20:23:42.011163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f35f0 00:29:38.326 [2024-07-24 20:23:42.012378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.326 [2024-07-24 20:23:42.012420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:38.326 [2024-07-24 20:23:42.026374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f81e0 00:29:38.326 [2024-07-24 20:23:42.027475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.326 [2024-07-24 20:23:42.027515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:38.326 [2024-07-24 20:23:42.043003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e49b0 00:29:38.326 [2024-07-24 20:23:42.044366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.326 [2024-07-24 20:23:42.044405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:38.326 [2024-07-24 20:23:42.059578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f7538 00:29:38.326 [2024-07-24 20:23:42.061134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.326 [2024-07-24 20:23:42.061172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:38.326 [2024-07-24 20:23:42.077206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f9b30 00:29:38.326 [2024-07-24 20:23:42.078954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.326 [2024-07-24 20:23:42.078993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:38.326 [2024-07-24 20:23:42.093599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190eee38 00:29:38.326 [2024-07-24 20:23:42.095610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.326 [2024-07-24 20:23:42.095655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:38.326 [2024-07-24 20:23:42.108403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ef270 00:29:38.326 [2024-07-24 20:23:42.110218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.326 [2024-07-24 20:23:42.110272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.123294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f8e88 00:29:38.585 [2024-07-24 20:23:42.124892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.124930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.139578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ef270 00:29:38.585 [2024-07-24 20:23:42.141103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.141141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.156092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ee5c8 00:29:38.585 [2024-07-24 20:23:42.157830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.157868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.171086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e1f80 00:29:38.585 [2024-07-24 20:23:42.172888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.172926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.187625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e2c28 00:29:38.585 [2024-07-24 20:23:42.189583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.189622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.204353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f6458 00:29:38.585 [2024-07-24 20:23:42.206542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.206580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.220960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f4b08 00:29:38.585 [2024-07-24 20:23:42.223372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.223410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.237529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ec408 00:29:38.585 [2024-07-24 20:23:42.240111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.240149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.248765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f1868 00:29:38.585 [2024-07-24 20:23:42.249866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.249907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.263686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e5ec8 00:29:38.585 [2024-07-24 20:23:42.264839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.264881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.280395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f7970 00:29:38.585 [2024-07-24 20:23:42.281716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.281755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.296976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e7c50 00:29:38.585 [2024-07-24 20:23:42.298491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 
20:23:42.298530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.313600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e1b48 00:29:38.585 [2024-07-24 20:23:42.315315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.315352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.329130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190efae0 00:29:38.585 [2024-07-24 20:23:42.331420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.331470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.342670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f1868 00:29:38.585 [2024-07-24 20:23:42.343742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.343779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:38.585 [2024-07-24 20:23:42.359126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190edd58 00:29:38.585 [2024-07-24 20:23:42.360410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.585 [2024-07-24 20:23:42.360455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.375781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f31b8 00:29:38.844 [2024-07-24 20:23:42.377272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.377311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.392242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f35f0 00:29:38.844 [2024-07-24 20:23:42.393961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.393999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.408726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ea248 00:29:38.844 [2024-07-24 20:23:42.410648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14477 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:38.844 [2024-07-24 20:23:42.410686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.425184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190edd58 00:29:38.844 [2024-07-24 20:23:42.427319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.427357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.441669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fb8b8 00:29:38.844 [2024-07-24 20:23:42.444035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.444072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.458154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190feb58 00:29:38.844 [2024-07-24 20:23:42.460722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.460760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.469286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e8d30 00:29:38.844 [2024-07-24 20:23:42.470373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.470410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.484190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190de470 00:29:38.844 [2024-07-24 20:23:42.485265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.485302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.500668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e7818 00:29:38.844 [2024-07-24 20:23:42.501957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.502002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.517142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e8088 00:29:38.844 [2024-07-24 20:23:42.518721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9255 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.518759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.533823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e84c0 00:29:38.844 [2024-07-24 20:23:42.535545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.535583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.550340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e3d08 00:29:38.844 [2024-07-24 20:23:42.552268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.552306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.566825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e7818 00:29:38.844 [2024-07-24 20:23:42.568966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.569003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.583283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fbcf0 00:29:38.844 [2024-07-24 20:23:42.585638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.585679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.599964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fd208 00:29:38.844 [2024-07-24 20:23:42.602520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.602558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.611110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190df550 00:29:38.844 [2024-07-24 20:23:42.612208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.612245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:38.844 [2024-07-24 20:23:42.626015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fa3a0 00:29:38.844 [2024-07-24 20:23:42.627101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.844 [2024-07-24 20:23:42.627139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.642615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f0ff8 00:29:39.104 [2024-07-24 20:23:42.643912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.643951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.659142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f31b8 00:29:39.104 [2024-07-24 20:23:42.660645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.660682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.675601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f35f0 00:29:39.104 [2024-07-24 20:23:42.677300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.677339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.692054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ea248 00:29:39.104 [2024-07-24 20:23:42.693978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.694015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.708563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f0ff8 00:29:39.104 [2024-07-24 20:23:42.710699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.710737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.725016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fb8b8 00:29:39.104 [2024-07-24 20:23:42.727361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.727399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.741512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ff3c8 00:29:39.104 [2024-07-24 20:23:42.744061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:20903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.744099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.752658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f92c0 00:29:39.104 [2024-07-24 20:23:42.753740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.753778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.769133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f4f40 00:29:39.104 [2024-07-24 20:23:42.770501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.770540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.785824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e0630 00:29:39.104 [2024-07-24 20:23:42.787344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.787382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.800764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e8088 00:29:39.104 [2024-07-24 20:23:42.802260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.802297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.817232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e84c0 00:29:39.104 [2024-07-24 20:23:42.818953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.818991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.833711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190feb58 00:29:39.104 [2024-07-24 20:23:42.835640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.835678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.850164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e6738 00:29:39.104 [2024-07-24 20:23:42.852298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.852336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.866676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190de038 00:29:39.104 [2024-07-24 20:23:42.869025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.869062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:39.104 [2024-07-24 20:23:42.883153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ef270 00:29:39.104 [2024-07-24 20:23:42.885757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.104 [2024-07-24 20:23:42.885795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:39.363 [2024-07-24 20:23:42.894518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ebb98 00:29:39.363 [2024-07-24 20:23:42.895607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.363 [2024-07-24 20:23:42.895644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:39.363 [2024-07-24 20:23:42.911006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190dfdc0 00:29:39.363 [2024-07-24 20:23:42.912306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.363 [2024-07-24 20:23:42.912350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.363 [2024-07-24 20:23:42.928907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e12d8 00:29:39.363 [2024-07-24 20:23:42.931033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.363 [2024-07-24 20:23:42.931070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:39.363 [2024-07-24 20:23:42.945380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ed4e8 00:29:39.363 [2024-07-24 20:23:42.947719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.363 [2024-07-24 20:23:42.947756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:39.363 [2024-07-24 20:23:42.961839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f92c0 00:29:39.363 [2024-07-24 
20:23:42.964384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.363 [2024-07-24 20:23:42.964421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.363 [2024-07-24 20:23:42.972971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190eee38 00:29:39.363 [2024-07-24 20:23:42.974042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.363 [2024-07-24 20:23:42.974079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:39.363 [2024-07-24 20:23:42.987848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e0630 00:29:39.363 [2024-07-24 20:23:42.988905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.363 [2024-07-24 20:23:42.988941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:39.363 [2024-07-24 20:23:43.004287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190dece0 00:29:39.364 [2024-07-24 20:23:43.005563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.005601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.020958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f0bc0 00:29:39.364 [2024-07-24 20:23:43.022509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.022547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.037610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f0788 00:29:39.364 [2024-07-24 20:23:43.039306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.039344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.054066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f3e60 00:29:39.364 [2024-07-24 20:23:43.055989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.056027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.070616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190dece0 
00:29:39.364 [2024-07-24 20:23:43.072737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.072774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.087087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ed920 00:29:39.364 [2024-07-24 20:23:43.089435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.089474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.103556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ea680 00:29:39.364 [2024-07-24 20:23:43.106103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.106141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.114720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ff3c8 00:29:39.364 [2024-07-24 20:23:43.115796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.115833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.131229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f1ca0 00:29:39.364 [2024-07-24 20:23:43.132516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.364 [2024-07-24 20:23:43.132553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:39.364 [2024-07-24 20:23:43.147795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e8088 00:29:39.622 [2024-07-24 20:23:43.149315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.622 [2024-07-24 20:23:43.149351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:39.622 [2024-07-24 20:23:43.162806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e73e0 00:29:39.622 [2024-07-24 20:23:43.164287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.622 [2024-07-24 20:23:43.164324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:39.622 [2024-07-24 20:23:43.179276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) 
with pdu=0x2000190e6fa8 00:29:39.622 [2024-07-24 20:23:43.180975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.622 [2024-07-24 20:23:43.181013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:39.622 [2024-07-24 20:23:43.195761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e9e10 00:29:39.622 [2024-07-24 20:23:43.197678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.622 [2024-07-24 20:23:43.197715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.622 [2024-07-24 20:23:43.212273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f4298 00:29:39.622 [2024-07-24 20:23:43.214396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.622 [2024-07-24 20:23:43.214443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:39.622 [2024-07-24 20:23:43.228772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190eaef0 00:29:39.623 [2024-07-24 20:23:43.231105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.231143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.245235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fe720 00:29:39.623 [2024-07-24 20:23:43.247779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.247817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.256370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f3a28 00:29:39.623 [2024-07-24 20:23:43.257447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.257484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.271296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f2d80 00:29:39.623 [2024-07-24 20:23:43.272406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.272460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.287974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1acf740) with pdu=0x2000190fe2e8 00:29:39.623 [2024-07-24 20:23:43.289252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.289291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.304532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f0bc0 00:29:39.623 [2024-07-24 20:23:43.306013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.306051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.321050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f0788 00:29:39.623 [2024-07-24 20:23:43.322755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.322803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.337560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ef6a8 00:29:39.623 [2024-07-24 20:23:43.339467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.339506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.354082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fe2e8 00:29:39.623 [2024-07-24 20:23:43.356212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.356249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.370628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190eaab8 00:29:39.623 [2024-07-24 20:23:43.372956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.372994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.387162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fa3a0 00:29:39.623 [2024-07-24 20:23:43.389732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.389773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.623 [2024-07-24 20:23:43.398400] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ec408 00:29:39.623 [2024-07-24 20:23:43.399475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.623 [2024-07-24 20:23:43.399511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.415147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f1ca0 00:29:39.880 [2024-07-24 20:23:43.416441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.416489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.430075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e3498 00:29:39.880 [2024-07-24 20:23:43.431398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.431445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.446660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f0788 00:29:39.880 [2024-07-24 20:23:43.448214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.448251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.464285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e12d8 00:29:39.880 [2024-07-24 20:23:43.466089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.466128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.480089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190de038 00:29:39.880 [2024-07-24 20:23:43.481879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.481917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.494821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f0bc0 00:29:39.880 [2024-07-24 20:23:43.496564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.496601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.509609] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e9e10 00:29:39.880 [2024-07-24 20:23:43.510741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.510779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.525620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f46d0 00:29:39.880 [2024-07-24 20:23:43.526505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.526542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.542441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e38d0 00:29:39.880 [2024-07-24 20:23:43.543498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.543536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.558998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f7100 00:29:39.880 [2024-07-24 20:23:43.560320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.880 [2024-07-24 20:23:43.560357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.880 [2024-07-24 20:23:43.577145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ee5c8 00:29:39.881 [2024-07-24 20:23:43.579773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.881 [2024-07-24 20:23:43.579809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.881 [2024-07-24 20:23:43.588358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190dece0 00:29:39.881 [2024-07-24 20:23:43.589492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.881 [2024-07-24 20:23:43.589529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:39.881 [2024-07-24 20:23:43.603331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e7818 00:29:39.881 [2024-07-24 20:23:43.604461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.881 [2024-07-24 20:23:43.604498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:39.881 
[2024-07-24 20:23:43.621133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190eaef0 00:29:39.881 [2024-07-24 20:23:43.622509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.881 [2024-07-24 20:23:43.622546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:39.881 [2024-07-24 20:23:43.637519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f92c0 00:29:39.881 [2024-07-24 20:23:43.639071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.881 [2024-07-24 20:23:43.639108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:39.881 [2024-07-24 20:23:43.653606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190e8d30 00:29:39.881 [2024-07-24 20:23:43.655190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.881 [2024-07-24 20:23:43.655227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:40.138 [2024-07-24 20:23:43.669611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190ea248 00:29:40.138 [2024-07-24 20:23:43.671197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.138 [2024-07-24 20:23:43.671233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:40.138 [2024-07-24 20:23:43.685371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fac10 00:29:40.138 [2024-07-24 20:23:43.686961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.138 [2024-07-24 20:23:43.686997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:40.138 [2024-07-24 20:23:43.701131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190fe720 00:29:40.138 [2024-07-24 20:23:43.702723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.138 [2024-07-24 20:23:43.702770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:40.138 [2024-07-24 20:23:43.718925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190eff18 00:29:40.138 [2024-07-24 20:23:43.721312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.138 [2024-07-24 20:23:43.721348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:29:40.138 [2024-07-24 20:23:43.733670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf740) with pdu=0x2000190f31b8 00:29:40.138 [2024-07-24 20:23:43.735425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.138 [2024-07-24 20:23:43.735484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:40.138 00:29:40.138 Latency(us) 00:29:40.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.138 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.138 nvme0n1 : 2.01 16102.21 62.90 0.00 0.00 7934.01 3252.53 19806.44 00:29:40.138 =================================================================================================================== 00:29:40.138 Total : 16102.21 62.90 0.00 0.00 7934.01 3252.53 19806.44 00:29:40.138 0 00:29:40.138 20:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:40.138 20:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:40.138 20:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:40.138 | .driver_specific 00:29:40.138 | .nvme_error 00:29:40.138 | .status_code 00:29:40.138 | .command_transient_transport_error' 00:29:40.138 20:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 126 > 0 )) 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2165065 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2165065 ']' 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2165065 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2165065 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2165065' 00:29:40.704 killing process with pid 2165065 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2165065 00:29:40.704 Received shutdown signal, test time was about 2.000000 seconds 00:29:40.704 00:29:40.704 Latency(us) 00:29:40.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.704 =================================================================================================================== 00:29:40.704 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.704 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2165065 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2165604 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2165604 /var/tmp/bperf.sock 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2165604 ']' 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:40.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:40.962 20:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:40.962 [2024-07-24 20:23:44.716435] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:40.962 [2024-07-24 20:23:44.716540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165604 ] 00:29:40.962 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:40.962 Zero copy mechanism will not be used. 
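The pass/fail logic for the run that just completed is visible in the trace above: get_transient_errcount pulls the per-bdev NVMe error statistics that bdev_nvme_set_options --nvme-error-stat enables, and the test asserts (( 126 > 0 )), i.e. that at least one completion carried COMMAND TRANSIENT TRANSPORT ERROR (00/22). The latency table squares with that: the injected digest failures complete with dnr:0 (Do Not Retry clear) and --bdev-retry-count -1 keeps retrying them, so the run still sustains 16102.21 IOPS (16102.21 x 4096 B is about 62.90 MiB/s, and with queue depth 128 at 7934 us average latency Little's law gives roughly 16.1k IOPS) while Fail/s stays at 0.00. A minimal standalone version of the same counter query, assuming an SPDK checkout at ./spdk and a bdevperf instance listening on /var/tmp/bperf.sock:

# Fetch per-bdev iostat and extract the transient transport error counter;
# the jq path mirrors the filter in the trace above.
errcount=$(./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')

# The run above recorded 126 of these; any non-zero count means the
# injected CRC32C corruption actually surfaced as digest errors.
(( errcount > 0 ))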
00:29:41.221 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.221 [2024-07-24 20:23:44.793521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.221 [2024-07-24 20:23:44.935783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.479 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:41.479 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:41.479 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:41.479 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:41.738 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:41.738 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.738 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:41.738 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.738 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:41.738 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:42.304 nvme0n1 00:29:42.304 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:42.304 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.304 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.304 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.304 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:42.304 20:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:42.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:42.562 Zero copy mechanism will not be used. 00:29:42.562 Running I/O for 2 seconds... 
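The xtrace above amounts to the following sequence; this is a condensed, commented sketch using the socket path, address, NQN, and bdev name exactly as they appear in the trace. It only restates the traced commands and is not part of the log; `rpc_cmd` is the harness helper, which here is assumed to address the nvmf target app's default RPC socket, while the bperf commands go to /var/tmp/bperf.sock.

# Sketch only: condenses the traced host/digest.sh commands, not a replacement for them.
BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Start bdevperf as the TCP host on its own RPC socket (-z: wait for perform_tests).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

# Keep per-bdev NVMe error-status counters and retry indefinitely, so injected
# errors accumulate in iostat instead of failing the bdev.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest (--ddgst) enabled, so every payload carries a CRC32C digest.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c operation (assumed target-side, via the harness's rpc_cmd);
# this surfaces as the repeated "Data digest error" lines that follow in the log.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second randwrite workload, then read back how many commands completed
# with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the test asserts the count is > 0.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
$BPERF_RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'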
00:29:42.562 [2024-07-24 20:23:46.230542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.230991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.231042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.239039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.239165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.239205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.247249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.247677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.247717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.255980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.256499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.256538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.264816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.265329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.265367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.274283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.274800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.274840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.282901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.283412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.283463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.291153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.291622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.291661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.299601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.300122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.300161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.309278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.309791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.309830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.319012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.319516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.319555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.328553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.329063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.329102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.563 [2024-07-24 20:23:46.338337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.563 [2024-07-24 20:23:46.338852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.563 [2024-07-24 20:23:46.338892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.822 [2024-07-24 20:23:46.348537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.822 [2024-07-24 20:23:46.349067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.822 [2024-07-24 20:23:46.349107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.822 [2024-07-24 20:23:46.359874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.822 [2024-07-24 20:23:46.360360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.822 [2024-07-24 20:23:46.360400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.822 [2024-07-24 20:23:46.370545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.822 [2024-07-24 20:23:46.370979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.822 [2024-07-24 20:23:46.371018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.822 [2024-07-24 20:23:46.381914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.822 [2024-07-24 20:23:46.382380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.822 [2024-07-24 20:23:46.382419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.822 [2024-07-24 20:23:46.391291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.822 [2024-07-24 20:23:46.391811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.822 [2024-07-24 20:23:46.391851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.822 [2024-07-24 20:23:46.402720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.822 [2024-07-24 20:23:46.403247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.822 [2024-07-24 20:23:46.403285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.411155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.411691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.411729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.419894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.420419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.420467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.428235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.428658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.428697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.437080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.437503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.437542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.445070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.445594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.445633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.453482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.453899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.453937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.462106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.462531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.462577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.470569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.471092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.471130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.480539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.481075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 
[2024-07-24 20:23:46.481114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.489803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.490308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.490347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.498955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.499465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.499504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.508952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.509457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.509495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.518534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.519044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.519082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.528186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.528693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.528731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.538841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.539354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.539392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.547728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.548143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.548181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.556406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.556923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.556961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.565089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.565624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.565662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.573332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.573751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.573789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.581822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.582328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.582366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.589730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.590141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.590179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.597390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.597846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.597886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.823 [2024-07-24 20:23:46.605420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:42.823 [2024-07-24 20:23:46.605886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.823 [2024-07-24 20:23:46.605925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.613643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.614125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.614175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.622203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.622712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.622751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.630268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.630779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.630818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.638953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.639364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.639404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.647244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.647737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.647776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.656012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.656513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.656551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.665240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.665739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.665777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.674525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.675012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.675049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.683348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.683827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.683865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.692099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.692629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.692668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.701671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.702176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.702216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.710907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.711394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.711441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.719421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.719847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.719886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.728603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 
[2024-07-24 20:23:46.729117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.729155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.737712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.738129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.738167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.083 [2024-07-24 20:23:46.746560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.083 [2024-07-24 20:23:46.747072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.083 [2024-07-24 20:23:46.747111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.755521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.755940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.755983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.764326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.764822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.764861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.772730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.773227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.773264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.781621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.782120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.782157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.791287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.791786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.791826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.800282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.800808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.800847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.809093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.809543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.809583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.817027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.817480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.817521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.824862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.825313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.825352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.832549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.832992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.833031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.840288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.840712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.840760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.848829] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.849251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.849289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.858043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.858473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.858512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.084 [2024-07-24 20:23:46.866445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.084 [2024-07-24 20:23:46.866870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.084 [2024-07-24 20:23:46.866908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.875802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.876218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.876256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.886051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.886469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.886508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.894712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.894913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.894951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.904713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.905154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.905192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:43.343 [2024-07-24 20:23:46.912922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.913329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.913368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.920551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.920968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.921007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.928222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.928676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.928715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.936408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.936835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.936873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.944918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.945420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.945478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.954900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.955319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.955356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.963535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.963919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.963956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.971508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.971892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.971930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.980382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.980862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.980902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.989335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.989717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.343 [2024-07-24 20:23:46.989755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.343 [2024-07-24 20:23:46.997587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.343 [2024-07-24 20:23:46.997954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:46.997993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.005335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.005721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.005761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.012644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.013011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.013049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.019762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.020196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.020244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.027039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.027478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.027516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.035521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.035921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.035958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.043700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.044155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.044193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.050976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.051353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.051391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.058174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.058620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.058666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.065970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.066331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.066369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.344 [2024-07-24 20:23:47.073731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 00:29:43.344 [2024-07-24 20:23:47.074095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.344 [2024-07-24 20:23:47.074132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:43.344 [2024-07-24 20:23:47.081364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90
00:29:43.344 [2024-07-24 20:23:47.081734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.344 [2024-07-24 20:23:47.081772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence repeats for each subsequent WRITE on tqpair=(0x1acf8e0) with pdu=0x2000190fef90 — data digest error in tcp.c:2113:data_crc32_calc_done, WRITE command notice (sqid:1 cid:15 nsid:1, lba varying, len:32), and completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061 — from 20:23:47.089 through 20:23:48.210 ...]
00:29:44.646 [2024-07-24 20:23:48.217837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90
00:29:44.646 [2024-07-24 20:23:48.218203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.646 [2024-07-24 20:23:48.218242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:44.646 [2024-07-24 20:23:48.225219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1acf8e0) with pdu=0x2000190fef90
00:29:44.646 [2024-07-24 20:23:48.225595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.646 [2024-07-24 20:23:48.225633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:44.646
00:29:44.646 Latency(us)
00:29:44.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.646 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:44.646 nvme0n1 : 2.00 3767.21 470.90 0.00 0.00 4236.50 2742.80 11893.57
00:29:44.646 ===================================================================================================================
00:29:44.646 Total : 3767.21 470.90 0.00 0.00 4236.50 2742.80 11893.57
00:29:44.646 0
00:29:44.646 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:44.646 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:44.646 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:44.646 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:44.646 | .driver_specific
00:29:44.646 | .nvme_error
00:29:44.646 | .status_code
00:29:44.646 | .command_transient_transport_error'
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 243 > 0 ))
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2165604
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2165604 ']'
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2165604
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2165604
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2165604'
00:29:44.905 killing process with pid 2165604
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2165604
00:29:44.905 Received shutdown signal, test time was about 2.000000 seconds
00:29:44.905
00:29:44.905 Latency(us)
00:29:44.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.905 ===================================================================================================================
00:29:44.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:44.905 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2165604
00:29:45.471 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2163867
00:29:45.471 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2163867 ']'
00:29:45.471 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2163867
00:29:45.471 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:45.471 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:45.471 20:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2163867
00:29:45.471 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:45.471 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:45.471 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2163867'
00:29:45.471 killing process with pid 2163867
00:29:45.471 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2163867
00:29:45.471 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2163867
00:29:45.730
00:29:45.730 real 0m18.940s
00:29:45.730 user 0m39.217s
00:29:45.730 sys 0m5.325s
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:45.730 ************************************
00:29:45.730 END TEST nvmf_digest_error
00:29:45.730 ************************************
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:45.730 rmmod nvme_tcp
00:29:45.730 rmmod nvme_fabrics
00:29:45.730 rmmod nvme_keyring
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2163867 ']'
00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- #
killprocess 2163867 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2163867 ']' 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2163867 00:29:45.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2163867) - No such process 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2163867 is not found' 00:29:45.730 Process with pid 2163867 is not found 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.730 20:23:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:48.268 00:29:48.268 real 0m43.901s 00:29:48.268 user 1m20.160s 00:29:48.268 sys 0m12.679s 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:48.268 ************************************ 00:29:48.268 END TEST nvmf_digest 00:29:48.268 ************************************ 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.268 ************************************ 00:29:48.268 START TEST nvmf_bdevperf 00:29:48.268 ************************************ 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:48.268 * Looking for test storage... 
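Before nvmf_bdevperf proceeds, note what the nvmftestfini teardown traced above amounts to in plain shell. The modprobe and address-flush commands appear verbatim in the trace; the namespace removal happens inside _remove_spdk_ns, whose body is not echoed in this log, so that step is an assumption:

    sync
    modprobe -v -r nvme-tcp           # unloads nvme_tcp, nvme_fabrics, nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # assumed content of _remove_spdk_ns (not traced here)
    ip -4 addr flush cvl_0_1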
00:29:48.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:48.268 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:48.269 20:23:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:50.807 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:50.807 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.807 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.808 20:23:54 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:50.808 Found net devices under 0000:84:00.0: cvl_0_0 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:50.808 Found net devices under 0000:84:00.1: cvl_0_1 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:50.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:29:50.808 00:29:50.808 --- 10.0.0.2 ping statistics --- 00:29:50.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.808 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:29:50.808 00:29:50.808 --- 10.0.0.1 ping statistics --- 00:29:50.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.808 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:50.808 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2168091 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2168091 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2168091 ']' 
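The nvmf_tcp_init plumbing traced above is easier to read restated as plain shell; every command below appears verbatim in the trace. cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator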
00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:51.067 20:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.067 [2024-07-24 20:23:54.671426] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:51.067 [2024-07-24 20:23:54.671541] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.067 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.067 [2024-07-24 20:23:54.764887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:51.326 [2024-07-24 20:23:54.905320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.326 [2024-07-24 20:23:54.905388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.326 [2024-07-24 20:23:54.905408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.326 [2024-07-24 20:23:54.905424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.326 [2024-07-24 20:23:54.905451] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
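nvmfappstart has just launched the target inside the namespace (nvmfpid=2168091) and waitforlisten now blocks until the app's RPC socket answers. A minimal sketch of the equivalent; the launch line is taken from the trace, while the polling loop is an assumption, since waitforlisten's body is not shown in this log:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target is ready (sketch)
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done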
00:29:51.326 [2024-07-24 20:23:54.905560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.326 [2024-07-24 20:23:54.905623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.326 [2024-07-24 20:23:54.905627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.260 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:52.260 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:52.260 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.260 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.260 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.261 [2024-07-24 20:23:55.731696] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.261 Malloc0 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.261 [2024-07-24 20:23:55.805935] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.261 { 00:29:52.261 "params": { 00:29:52.261 "name": "Nvme$subsystem", 00:29:52.261 "trtype": "$TEST_TRANSPORT", 00:29:52.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.261 "adrfam": "ipv4", 00:29:52.261 "trsvcid": "$NVMF_PORT", 00:29:52.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.261 "hdgst": ${hdgst:-false}, 00:29:52.261 "ddgst": ${ddgst:-false} 00:29:52.261 }, 00:29:52.261 "method": "bdev_nvme_attach_controller" 00:29:52.261 } 00:29:52.261 EOF 00:29:52.261 )") 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:52.261 20:23:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:52.261 "params": { 00:29:52.261 "name": "Nvme1", 00:29:52.261 "trtype": "tcp", 00:29:52.261 "traddr": "10.0.0.2", 00:29:52.261 "adrfam": "ipv4", 00:29:52.261 "trsvcid": "4420", 00:29:52.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:52.261 "hdgst": false, 00:29:52.261 "ddgst": false 00:29:52.261 }, 00:29:52.261 "method": "bdev_nvme_attach_controller" 00:29:52.261 }' 00:29:52.261 [2024-07-24 20:23:55.863538] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:52.261 [2024-07-24 20:23:55.863634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168249 ] 00:29:52.261 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.261 [2024-07-24 20:23:55.945488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.519 [2024-07-24 20:23:56.088701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.519 Running I/O for 1 seconds... 
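While the 1-second verify run executes, it is worth restating the two halves set up above. The target was provisioned through rpc_cmd (TCP transport, Malloc0 namespace, cnode1 subsystem, listener), and bdevperf consumes the JSON that gen_nvmf_target_json just printed on /dev/fd/62. A hedged standalone sketch, assuming rpc.py talks to the target's default socket and assuming the standard SPDK subsystem wrapper around the printed params (the wrapper itself is not echoed in this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target provisioning, as traced via rpc_cmd
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator: the same config bdevperf reads from /dev/fd/62, written to a file
    cat > /tmp/bperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1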
00:29:53.894 00:29:53.894 Latency(us) 00:29:53.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.894 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:53.894 Verification LBA range: start 0x0 length 0x4000 00:29:53.894 Nvme1n1 : 1.02 6363.19 24.86 0.00 0.00 20011.77 2269.49 16019.91 00:29:53.894 =================================================================================================================== 00:29:53.894 Total : 6363.19 24.86 0.00 0.00 20011.77 2269.49 16019.91 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2168507 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:53.894 { 00:29:53.894 "params": { 00:29:53.894 "name": "Nvme$subsystem", 00:29:53.894 "trtype": "$TEST_TRANSPORT", 00:29:53.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.894 "adrfam": "ipv4", 00:29:53.894 "trsvcid": "$NVMF_PORT", 00:29:53.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.894 "hdgst": ${hdgst:-false}, 00:29:53.894 "ddgst": ${ddgst:-false} 00:29:53.894 }, 00:29:53.894 "method": "bdev_nvme_attach_controller" 00:29:53.894 } 00:29:53.894 EOF 00:29:53.894 )") 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:53.894 20:23:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:53.894 "params": { 00:29:53.894 "name": "Nvme1", 00:29:53.894 "trtype": "tcp", 00:29:53.894 "traddr": "10.0.0.2", 00:29:53.894 "adrfam": "ipv4", 00:29:53.894 "trsvcid": "4420", 00:29:53.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.894 "hdgst": false, 00:29:53.894 "ddgst": false 00:29:53.894 }, 00:29:53.894 "method": "bdev_nvme_attach_controller" 00:29:53.894 }' 00:29:53.894 [2024-07-24 20:23:57.670629] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:29:53.894 [2024-07-24 20:23:57.670739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168507 ] 00:29:54.153 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.153 [2024-07-24 20:23:57.752222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.153 [2024-07-24 20:23:57.892350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.744 Running I/O for 15 seconds... 
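The 15-second pass is the failure-injection run: as the next lines show, bdevperf.sh kills the target outright while I/O is in flight, and the -f flag passed above keeps bdevperf going across the resulting failures (a hedged reading of that flag). In outline:

    kill -9 2168091   # the nvmf_tgt started earlier; traced at host/bdevperf.sh@33
    sleep 3
    # every queued command now completes with ABORTED - SQ DELETION (00/08)
    # as the TCP qpairs are torn down underneath the nvme bdev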
00:29:57.275 20:24:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2168091
00:29:57.275 20:24:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:57.275 [... repeated nvme_qpair notice pairs elided (20:24:00.636833-20:24:00.644436): every outstanding I/O on qid:1 (READ lba 122264-123072, WRITE lba 123088-123280, len:8 each) printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) ...]
00:29:57.277 [2024-07-24 20:24:00.644460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19168a0 is same with the state(5) to be set
00:29:57.277 [2024-07-24 20:24:00.644495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:57.277 [2024-07-24 20:24:00.644512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:57.277 [2024-07-24 20:24:00.644528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123080 len:8 PRP1 0x0 PRP2 0x0
00:29:57.277 [2024-07-24 20:24:00.644546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:57.277 [2024-07-24 20:24:00.644629] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19168a0 was disconnected and freed. reset controller.
00:29:57.277 [... 4 repeated admin qpair notice pairs elided (20:24:00.644737-20:24:00.644890): ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 each completed as ABORTED - SQ DELETION (00/08) ...]
00:29:57.277 [2024-07-24 20:24:00.644907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:57.277 [2024-07-24 20:24:00.651878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.277 [2024-07-24 20:24:00.651930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:57.277 [2024-07-24 20:24:00.653146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.277 [2024-07-24 20:24:00.653220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:57.277 [2024-07-24 20:24:00.653261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:57.277 [2024-07-24 20:24:00.653675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:57.277 [2024-07-24 20:24:00.654236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.277 [2024-07-24 20:24:00.654289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.277 [2024-07-24 20:24:00.654327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.277 [2024-07-24 20:24:00.661441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.277 [2024-07-24 20:24:00.670367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.671261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.671333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.671373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.671800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.672348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.672399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.672448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.679379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.277 [2024-07-24 20:24:00.687699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.688564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.688603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.688624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.689169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.689633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.689662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.689680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.696923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.277 [2024-07-24 20:24:00.705685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.706495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.706567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.706607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.707141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.707630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.707660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.707678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.714671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.277 [2024-07-24 20:24:00.723350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.723980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.724049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.724088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.724591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.725058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.725112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.725146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.732050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.277 [2024-07-24 20:24:00.741272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.741893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.741962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.742000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.742545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.742973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.743027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.743072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.750062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.277 [2024-07-24 20:24:00.758645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.759446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.759511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.759533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.759894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.760458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.760512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.760530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.767521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.277 [2024-07-24 20:24:00.776505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.777155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.777225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.777263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.777686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.778210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.778262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.778295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.785238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.277 [2024-07-24 20:24:00.794198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.794846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.794916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.794955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.795521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.795861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.795898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.795921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.802709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.277 [2024-07-24 20:24:00.811858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.812623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.812668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.812690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.813194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.813646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.813676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.813724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.820317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.277 [2024-07-24 20:24:00.828139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.277 [2024-07-24 20:24:00.828864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-07-24 20:24:00.828934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:57.277 [2024-07-24 20:24:00.828973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:57.277 [2024-07-24 20:24:00.829535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:57.277 [2024-07-24 20:24:00.829961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.277 [2024-07-24 20:24:00.830013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.277 [2024-07-24 20:24:00.830046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.277 [2024-07-24 20:24:00.836554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 48 further iterations of the same reset cycle omitted: nvme_ctrlr_disconnect *NOTICE* "resetting controller" → posix_sock_create connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock error for tqpair=0x16e6540 (addr=10.0.0.2, port=4420) → spdk_nvme_ctrlr_reconnect_poll_async "controller reinitialization failed" → _bdev_nvme_reset_ctrlr_complete "Resetting controller failed.", spanning [2024-07-24 20:24:00.845959] through [2024-07-24 20:24:01.611291] (elapsed 00:29:57.277 to 00:29:58.059) ...]
00:29:58.059 [2024-07-24 20:24:01.620385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.620956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.621005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.621032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.621410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.621771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.621818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.621843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.627347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.060 [2024-07-24 20:24:01.636545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.637124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.637173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.637201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.637579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.637944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.637981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.638005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.643542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.060 [2024-07-24 20:24:01.652595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.653303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.653357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.653396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.653752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.654099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.654139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.654162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.659775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.060 [2024-07-24 20:24:01.669004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.669665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.669715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.669757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.670137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.670537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.670572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.670591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.676074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.060 [2024-07-24 20:24:01.685216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.685830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.685880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.685908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.686286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.686650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.686679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.686698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.692215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.060 [2024-07-24 20:24:01.701350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.701939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.701989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.702017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.702395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.702754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.702792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.702816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.708331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.060 [2024-07-24 20:24:01.717418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.717989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.718038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.718065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.718470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.718807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.718844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.718867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.724380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.060 [2024-07-24 20:24:01.733517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.734154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.734204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.734232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.734611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.734998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.735035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.735059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.740578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.060 [2024-07-24 20:24:01.749657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.750301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.750349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.060 [2024-07-24 20:24:01.750376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.060 [2024-07-24 20:24:01.750725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.060 [2024-07-24 20:24:01.751125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.060 [2024-07-24 20:24:01.751162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.060 [2024-07-24 20:24:01.751184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.060 [2024-07-24 20:24:01.756673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.060 [2024-07-24 20:24:01.765797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.060 [2024-07-24 20:24:01.766413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.060 [2024-07-24 20:24:01.766475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.061 [2024-07-24 20:24:01.766517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.061 [2024-07-24 20:24:01.766861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.061 [2024-07-24 20:24:01.767247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.061 [2024-07-24 20:24:01.767283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.061 [2024-07-24 20:24:01.767306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.061 [2024-07-24 20:24:01.772804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.061 [2024-07-24 20:24:01.781912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.061 [2024-07-24 20:24:01.782532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-07-24 20:24:01.782570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.061 [2024-07-24 20:24:01.782592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.061 [2024-07-24 20:24:01.782965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.061 [2024-07-24 20:24:01.783351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.061 [2024-07-24 20:24:01.783387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.061 [2024-07-24 20:24:01.783420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.061 [2024-07-24 20:24:01.788931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.061 [2024-07-24 20:24:01.798040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.061 [2024-07-24 20:24:01.798619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-07-24 20:24:01.798657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.061 [2024-07-24 20:24:01.798679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.061 [2024-07-24 20:24:01.799062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.061 [2024-07-24 20:24:01.799475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.061 [2024-07-24 20:24:01.799504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.061 [2024-07-24 20:24:01.799523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.061 [2024-07-24 20:24:01.805011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.061 [2024-07-24 20:24:01.814106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.061 [2024-07-24 20:24:01.814770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-07-24 20:24:01.814818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.061 [2024-07-24 20:24:01.814844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.061 [2024-07-24 20:24:01.815222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.061 [2024-07-24 20:24:01.815602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.061 [2024-07-24 20:24:01.815632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.061 [2024-07-24 20:24:01.815650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.061 [2024-07-24 20:24:01.821155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.061 [2024-07-24 20:24:01.830303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.061 [2024-07-24 20:24:01.830899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-07-24 20:24:01.830949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.061 [2024-07-24 20:24:01.830976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.061 [2024-07-24 20:24:01.831353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.061 [2024-07-24 20:24:01.831703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.061 [2024-07-24 20:24:01.831751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.061 [2024-07-24 20:24:01.831774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.061 [2024-07-24 20:24:01.837287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.322 [2024-07-24 20:24:01.845989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.322 [2024-07-24 20:24:01.846599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.322 [2024-07-24 20:24:01.846643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.322 [2024-07-24 20:24:01.846667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.322 [2024-07-24 20:24:01.847070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.322 [2024-07-24 20:24:01.847470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.322 [2024-07-24 20:24:01.847520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.322 [2024-07-24 20:24:01.847539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.322 [2024-07-24 20:24:01.852707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.322 [2024-07-24 20:24:01.862157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.322 [2024-07-24 20:24:01.862690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.322 [2024-07-24 20:24:01.862750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.322 [2024-07-24 20:24:01.862778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.322 [2024-07-24 20:24:01.863155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.322 [2024-07-24 20:24:01.863552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.322 [2024-07-24 20:24:01.863582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.322 [2024-07-24 20:24:01.863600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.322 [2024-07-24 20:24:01.869110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.322 [2024-07-24 20:24:01.878209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.322 [2024-07-24 20:24:01.878756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.322 [2024-07-24 20:24:01.878804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.322 [2024-07-24 20:24:01.878832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.322 [2024-07-24 20:24:01.879210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.322 [2024-07-24 20:24:01.879593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.322 [2024-07-24 20:24:01.879623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.322 [2024-07-24 20:24:01.879641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.322 [2024-07-24 20:24:01.885109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.322 [2024-07-24 20:24:01.894247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.322 [2024-07-24 20:24:01.894882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.322 [2024-07-24 20:24:01.894931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.322 [2024-07-24 20:24:01.894958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.322 [2024-07-24 20:24:01.895336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.322 [2024-07-24 20:24:01.895695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.322 [2024-07-24 20:24:01.895743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.322 [2024-07-24 20:24:01.895767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.322 [2024-07-24 20:24:01.901297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.322 [2024-07-24 20:24:01.910392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.322 [2024-07-24 20:24:01.910994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.322 [2024-07-24 20:24:01.911045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.322 [2024-07-24 20:24:01.911073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.322 [2024-07-24 20:24:01.911513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.322 [2024-07-24 20:24:01.911849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.322 [2024-07-24 20:24:01.911885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.322 [2024-07-24 20:24:01.911912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.322 [2024-07-24 20:24:01.917609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.322 [2024-07-24 20:24:01.926520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.322 [2024-07-24 20:24:01.927111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.322 [2024-07-24 20:24:01.927163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.322 [2024-07-24 20:24:01.927191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.322 [2024-07-24 20:24:01.927577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.322 [2024-07-24 20:24:01.927943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.322 [2024-07-24 20:24:01.927980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:01.928004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:01.933573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.323 [2024-07-24 20:24:01.942711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:01.943408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:01.943483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:01.943508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:01.943841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:01.944228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:01.944278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:01.944302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:01.949795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.323 [2024-07-24 20:24:01.958911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:01.959478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:01.959537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:01.959559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:01.959907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:01.960291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:01.960328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:01.960351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:01.965795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.323 [2024-07-24 20:24:01.974921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:01.975554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:01.975592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:01.975613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:01.975980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:01.976365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:01.976402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:01.976426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:01.981945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.323 [2024-07-24 20:24:01.991062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:01.991634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:01.991673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:01.991694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:01.992083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:01.992493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:01.992523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:01.992541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:01.998022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.323 [2024-07-24 20:24:02.007175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:02.007723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:02.007773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:02.007810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:02.008189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:02.008582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:02.008612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:02.008631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:02.014169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.323 [2024-07-24 20:24:02.023473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:02.024056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:02.024106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:02.024134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:02.024530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:02.024875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:02.024912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:02.024937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:02.030462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.323 [2024-07-24 20:24:02.039576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:02.040152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:02.040200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:02.040228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:02.040598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:02.040956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:02.040993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:02.041017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:02.046404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.323 [2024-07-24 20:24:02.055534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:02.056066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:02.056115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:02.056143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:02.056554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:02.056901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:02.056948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.323 [2024-07-24 20:24:02.056973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.323 [2024-07-24 20:24:02.062494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.323 [2024-07-24 20:24:02.071639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.323 [2024-07-24 20:24:02.072315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.323 [2024-07-24 20:24:02.072364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.323 [2024-07-24 20:24:02.072391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.323 [2024-07-24 20:24:02.072753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.323 [2024-07-24 20:24:02.073139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.323 [2024-07-24 20:24:02.073175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.324 [2024-07-24 20:24:02.073199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.324 [2024-07-24 20:24:02.078780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.324 [2024-07-24 20:24:02.089536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.324 [2024-07-24 20:24:02.090211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.324 [2024-07-24 20:24:02.090280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.324 [2024-07-24 20:24:02.090318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.324 [2024-07-24 20:24:02.090742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.324 [2024-07-24 20:24:02.091290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.324 [2024-07-24 20:24:02.091342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.324 [2024-07-24 20:24:02.091376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.324 [2024-07-24 20:24:02.098484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.583 [2024-07-24 20:24:02.105379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.583 [2024-07-24 20:24:02.105875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.583 [2024-07-24 20:24:02.105911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.583 [2024-07-24 20:24:02.105931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.583 [2024-07-24 20:24:02.106268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.583 [2024-07-24 20:24:02.106678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.583 [2024-07-24 20:24:02.106707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.583 [2024-07-24 20:24:02.106724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.583 [2024-07-24 20:24:02.112850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.583 [2024-07-24 20:24:02.123064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.583 [2024-07-24 20:24:02.123754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.583 [2024-07-24 20:24:02.123824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.123863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.124399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.124841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.124895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.124928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.131954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.584 [2024-07-24 20:24:02.140606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.141315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.141384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.141422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.141827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.142372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.142423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.142490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.149225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.584 [2024-07-24 20:24:02.158227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.158888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.158958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.158996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.159527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.159961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.160015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.160047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.166903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.584 [2024-07-24 20:24:02.175003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.175690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.175731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.175760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.176266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.176682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.176712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.176731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.183716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.584 [2024-07-24 20:24:02.192708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.193494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.193534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.193556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.194023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.194553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.194584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.194602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.201570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.584 [2024-07-24 20:24:02.208611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.209110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.209149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.209170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.209475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.209773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.209801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.209819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.214204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.584 [2024-07-24 20:24:02.223546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.224048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.224086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.224107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.224399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.224706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.224742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.224761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.229144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.584 [2024-07-24 20:24:02.238483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.238980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.239017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.239038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.239330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.239636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.239665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.239684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.244085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.584 [2024-07-24 20:24:02.253412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.253910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.253947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.253968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.584 [2024-07-24 20:24:02.254260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.584 [2024-07-24 20:24:02.254570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.584 [2024-07-24 20:24:02.254599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.584 [2024-07-24 20:24:02.254617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.584 [2024-07-24 20:24:02.259000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.584 [2024-07-24 20:24:02.271052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.584 [2024-07-24 20:24:02.271791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.584 [2024-07-24 20:24:02.271860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.584 [2024-07-24 20:24:02.271898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.585 [2024-07-24 20:24:02.272455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.585 [2024-07-24 20:24:02.272850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.585 [2024-07-24 20:24:02.272903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.585 [2024-07-24 20:24:02.272936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.585 [2024-07-24 20:24:02.279080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.585 [2024-07-24 20:24:02.288619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.585 [2024-07-24 20:24:02.289116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.585 [2024-07-24 20:24:02.289154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:58.585 [2024-07-24 20:24:02.289176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:58.585 [2024-07-24 20:24:02.289480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:58.585 [2024-07-24 20:24:02.289779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.585 [2024-07-24 20:24:02.289807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.585 [2024-07-24 20:24:02.289825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.585 [2024-07-24 20:24:02.296181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.585 [2024-07-24 20:24:02.306411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.585 [2024-07-24 20:24:02.307090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.585 [2024-07-24 20:24:02.307158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.585 [2024-07-24 20:24:02.307196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.585 [2024-07-24 20:24:02.307648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.585 [2024-07-24 20:24:02.308190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.585 [2024-07-24 20:24:02.308243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.585 [2024-07-24 20:24:02.308275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.585 [2024-07-24 20:24:02.315526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.585 [2024-07-24 20:24:02.324149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.585 [2024-07-24 20:24:02.324859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.585 [2024-07-24 20:24:02.324927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.585 [2024-07-24 20:24:02.324965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.585 [2024-07-24 20:24:02.325525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.585 [2024-07-24 20:24:02.325911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.585 [2024-07-24 20:24:02.325962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.585 [2024-07-24 20:24:02.325996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.585 [2024-07-24 20:24:02.332984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.585 [2024-07-24 20:24:02.343022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.585 [2024-07-24 20:24:02.343820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.585 [2024-07-24 20:24:02.343888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.585 [2024-07-24 20:24:02.343927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.585 [2024-07-24 20:24:02.344498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.585 [2024-07-24 20:24:02.345044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.585 [2024-07-24 20:24:02.345095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.585 [2024-07-24 20:24:02.345128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.585 [2024-07-24 20:24:02.352804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.585 [2024-07-24 20:24:02.360664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.585 [2024-07-24 20:24:02.361462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.585 [2024-07-24 20:24:02.361517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.585 [2024-07-24 20:24:02.361538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.585 [2024-07-24 20:24:02.361975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.585 [2024-07-24 20:24:02.362531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.585 [2024-07-24 20:24:02.362561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.585 [2024-07-24 20:24:02.362579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.845 [2024-07-24 20:24:02.367840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.845 [2024-07-24 20:24:02.377600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.845 [2024-07-24 20:24:02.378335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.845 [2024-07-24 20:24:02.378404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.845 [2024-07-24 20:24:02.378465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.845 [2024-07-24 20:24:02.378889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.845 [2024-07-24 20:24:02.379452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.845 [2024-07-24 20:24:02.379502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.845 [2024-07-24 20:24:02.379519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.845 [2024-07-24 20:24:02.386542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.845 [2024-07-24 20:24:02.395307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.845 [2024-07-24 20:24:02.396093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.845 [2024-07-24 20:24:02.396161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.845 [2024-07-24 20:24:02.396199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.845 [2024-07-24 20:24:02.396658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.845 [2024-07-24 20:24:02.397173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.845 [2024-07-24 20:24:02.397224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.845 [2024-07-24 20:24:02.397271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.845 [2024-07-24 20:24:02.404481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.845 [2024-07-24 20:24:02.413552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.845 [2024-07-24 20:24:02.414177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.845 [2024-07-24 20:24:02.414245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.845 [2024-07-24 20:24:02.414285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.845 [2024-07-24 20:24:02.414702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.845 [2024-07-24 20:24:02.415256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.845 [2024-07-24 20:24:02.415308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.845 [2024-07-24 20:24:02.415340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.845 [2024-07-24 20:24:02.422393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.845 [2024-07-24 20:24:02.430553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.845 [2024-07-24 20:24:02.431196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.845 [2024-07-24 20:24:02.431271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.845 [2024-07-24 20:24:02.431313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.845 [2024-07-24 20:24:02.431743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.845 [2024-07-24 20:24:02.432293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.432345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.432378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.439727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.447859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.448761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.448833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.448872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.449409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.449808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.449862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.449895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.456894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.465616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.466369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.466478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.466504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.466926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.467502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.467532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.467550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.474590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.483373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.484080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.484149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.484188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.484648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.485193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.485245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.485279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.492553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.501964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.502870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.502941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.502981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.503540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.504083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.504134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.504167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.511242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.520105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.520992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.521062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.521099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.521656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.522215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.522266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.522299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.529405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.537869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.538737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.538806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.538845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.539379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.539792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.539847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.539880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.546924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.556688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.557553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.557622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.557661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.558194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.558658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.558687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.558706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.565774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.574896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.575765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.575835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.575874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.576408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.576865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.576917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.576950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.846 [2024-07-24 20:24:02.584102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.846 [2024-07-24 20:24:02.592757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.846 [2024-07-24 20:24:02.593635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.846 [2024-07-24 20:24:02.593704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.846 [2024-07-24 20:24:02.593743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.846 [2024-07-24 20:24:02.594277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.846 [2024-07-24 20:24:02.594703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.846 [2024-07-24 20:24:02.594760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.846 [2024-07-24 20:24:02.594794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.847 [2024-07-24 20:24:02.601825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.847 [2024-07-24 20:24:02.610677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.847 [2024-07-24 20:24:02.611493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.847 [2024-07-24 20:24:02.611583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.847 [2024-07-24 20:24:02.611624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.847 [2024-07-24 20:24:02.612161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.847 [2024-07-24 20:24:02.612640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.847 [2024-07-24 20:24:02.612670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.847 [2024-07-24 20:24:02.612687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:58.847 [2024-07-24 20:24:02.619738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:58.847 [2024-07-24 20:24:02.627359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:58.847 [2024-07-24 20:24:02.627977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.847 [2024-07-24 20:24:02.628014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:58.847 [2024-07-24 20:24:02.628034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:58.847 [2024-07-24 20:24:02.628548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:58.847 [2024-07-24 20:24:02.628987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:58.847 [2024-07-24 20:24:02.629015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:58.847 [2024-07-24 20:24:02.629031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.106 [2024-07-24 20:24:02.634696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.106 [2024-07-24 20:24:02.645914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.106 [2024-07-24 20:24:02.646773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.106 [2024-07-24 20:24:02.646842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.106 [2024-07-24 20:24:02.646893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.106 [2024-07-24 20:24:02.647449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.106 [2024-07-24 20:24:02.647830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.106 [2024-07-24 20:24:02.647882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.106 [2024-07-24 20:24:02.647914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.106 [2024-07-24 20:24:02.654956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.106 [2024-07-24 20:24:02.664123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.106 [2024-07-24 20:24:02.664935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.106 [2024-07-24 20:24:02.665006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.106 [2024-07-24 20:24:02.665045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.106 [2024-07-24 20:24:02.665582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.106 [2024-07-24 20:24:02.666030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.106 [2024-07-24 20:24:02.666082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.106 [2024-07-24 20:24:02.666115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.106 [2024-07-24 20:24:02.673247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.106 [2024-07-24 20:24:02.680916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.106 [2024-07-24 20:24:02.681784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.106 [2024-07-24 20:24:02.681858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.106 [2024-07-24 20:24:02.681898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.106 [2024-07-24 20:24:02.682458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.682874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.682927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.682960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.690466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.698656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.699483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.699523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.699544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.699981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.700547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.700584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.700604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.707788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.717334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.718265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.718335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.718374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.718797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.719344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.719395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.719443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.726350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.736143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.737041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.737111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.737150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.737637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.737960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.738012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.738045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.744927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.754282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.755018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.755088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.755126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.755613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.756085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.756137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.756169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.763281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.772384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.773250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.773321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.773359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.773805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.774353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.774405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.774454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.781479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.790627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.791527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.791597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.791635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.792170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.792646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.792676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.792695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.799759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.808863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.809791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.809862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.809901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.810460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.810857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.810909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.810941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.818013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.826684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.827535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.827574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.827595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.828095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.828605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.107 [2024-07-24 20:24:02.828635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.107 [2024-07-24 20:24:02.828653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.107 [2024-07-24 20:24:02.835663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.107 [2024-07-24 20:24:02.844459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.107 [2024-07-24 20:24:02.845203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.107 [2024-07-24 20:24:02.845273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.107 [2024-07-24 20:24:02.845313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.107 [2024-07-24 20:24:02.845704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.107 [2024-07-24 20:24:02.846112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.108 [2024-07-24 20:24:02.846164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.108 [2024-07-24 20:24:02.846196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.108 [2024-07-24 20:24:02.853115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.108 [2024-07-24 20:24:02.859792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.108 [2024-07-24 20:24:02.860395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.108 [2024-07-24 20:24:02.860458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.108 [2024-07-24 20:24:02.860502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.108 [2024-07-24 20:24:02.860843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.108 [2024-07-24 20:24:02.861227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.108 [2024-07-24 20:24:02.861265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.108 [2024-07-24 20:24:02.861288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.108 [2024-07-24 20:24:02.868189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.108 [2024-07-24 20:24:02.877592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.108 [2024-07-24 20:24:02.878418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.108 [2024-07-24 20:24:02.878482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.108 [2024-07-24 20:24:02.878523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.108 [2024-07-24 20:24:02.878932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.108 [2024-07-24 20:24:02.879499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.108 [2024-07-24 20:24:02.879546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.108 [2024-07-24 20:24:02.879571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.108 [2024-07-24 20:24:02.886518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.368 [2024-07-24 20:24:02.893851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.368 [2024-07-24 20:24:02.894659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.368 [2024-07-24 20:24:02.894702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.368 [2024-07-24 20:24:02.894721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.368 [2024-07-24 20:24:02.895203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.368 [2024-07-24 20:24:02.895500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.368 [2024-07-24 20:24:02.895528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.368 [2024-07-24 20:24:02.895545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.368 [2024-07-24 20:24:02.902291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.368 [2024-07-24 20:24:02.911443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.368 [2024-07-24 20:24:02.912102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.368 [2024-07-24 20:24:02.912170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.368 [2024-07-24 20:24:02.912208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.368 [2024-07-24 20:24:02.912646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.368 [2024-07-24 20:24:02.913156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.368 [2024-07-24 20:24:02.913207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.368 [2024-07-24 20:24:02.913240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.368 [2024-07-24 20:24:02.920163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.368 [2024-07-24 20:24:02.929264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.368 [2024-07-24 20:24:02.929984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.368 [2024-07-24 20:24:02.930058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.368 [2024-07-24 20:24:02.930098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.368 [2024-07-24 20:24:02.930615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.368 [2024-07-24 20:24:02.931140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.368 [2024-07-24 20:24:02.931197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.368 [2024-07-24 20:24:02.931231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.368 [2024-07-24 20:24:02.937546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.368 [2024-07-24 20:24:02.947005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.368 [2024-07-24 20:24:02.947868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.368 [2024-07-24 20:24:02.947939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.368 [2024-07-24 20:24:02.947978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.368 [2024-07-24 20:24:02.948536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.368 [2024-07-24 20:24:02.948939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.368 [2024-07-24 20:24:02.948990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.368 [2024-07-24 20:24:02.949024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.368 [2024-07-24 20:24:02.956037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.369 [2024-07-24 20:24:02.964735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.369 [2024-07-24 20:24:02.965569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.369 [2024-07-24 20:24:02.965608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.369 [2024-07-24 20:24:02.965630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.369 [2024-07-24 20:24:02.966125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.369 [2024-07-24 20:24:02.966618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.369 [2024-07-24 20:24:02.966647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.369 [2024-07-24 20:24:02.966665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.369 [2024-07-24 20:24:02.974529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.369 [2024-07-24 20:24:02.983260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.369 [2024-07-24 20:24:02.984011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.369 [2024-07-24 20:24:02.984081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.369 [2024-07-24 20:24:02.984120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.369 [2024-07-24 20:24:02.984614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.369 [2024-07-24 20:24:02.985093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.369 [2024-07-24 20:24:02.985145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.369 [2024-07-24 20:24:02.985177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.369 [2024-07-24 20:24:02.992182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.369 [2024-07-24 20:24:03.001314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.369 [2024-07-24 20:24:03.002035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.369 [2024-07-24 20:24:03.002106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.369 [2024-07-24 20:24:03.002145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.369 [2024-07-24 20:24:03.002633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.369 [2024-07-24 20:24:03.003127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.369 [2024-07-24 20:24:03.003180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.369 [2024-07-24 20:24:03.003212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.369 [2024-07-24 20:24:03.010238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.369 [2024-07-24 20:24:03.019362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.369 [2024-07-24 20:24:03.020255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.369 [2024-07-24 20:24:03.020323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.369 [2024-07-24 20:24:03.020362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.369 [2024-07-24 20:24:03.020744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.369 [2024-07-24 20:24:03.021305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.369 [2024-07-24 20:24:03.021357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.369 [2024-07-24 20:24:03.021389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.369 [2024-07-24 20:24:03.028477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.369 [2024-07-24 20:24:03.037580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.369 [2024-07-24 20:24:03.038459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.369 [2024-07-24 20:24:03.038525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.369 [2024-07-24 20:24:03.038547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.369 [2024-07-24 20:24:03.038946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.369 [2024-07-24 20:24:03.039519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.369 [2024-07-24 20:24:03.039549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.369 [2024-07-24 20:24:03.039566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.369 [2024-07-24 20:24:03.046572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.369 [2024-07-24 20:24:03.055733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.369 [2024-07-24 20:24:03.056569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.369 [2024-07-24 20:24:03.056639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.369 [2024-07-24 20:24:03.056677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.369 [2024-07-24 20:24:03.057213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.369 [2024-07-24 20:24:03.057671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.369 [2024-07-24 20:24:03.057701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.369 [2024-07-24 20:24:03.057747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.369 [2024-07-24 20:24:03.064797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.369 [2024-07-24 20:24:03.073884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.369 [2024-07-24 20:24:03.074774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.369 [2024-07-24 20:24:03.074847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420
00:29:59.369 [2024-07-24 20:24:03.074885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set
00:29:59.369 [2024-07-24 20:24:03.075463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor
00:29:59.369 [2024-07-24 20:24:03.075874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.369 [2024-07-24 20:24:03.075926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.369 [2024-07-24 20:24:03.075958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.369 [2024-07-24 20:24:03.083028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.369 [2024-07-24 20:24:03.091707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.369 [2024-07-24 20:24:03.092523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.369 [2024-07-24 20:24:03.092562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.369 [2024-07-24 20:24:03.092583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.369 [2024-07-24 20:24:03.093057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.369 [2024-07-24 20:24:03.093570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.369 [2024-07-24 20:24:03.093600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.369 [2024-07-24 20:24:03.093619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.369 [2024-07-24 20:24:03.100611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.369 [2024-07-24 20:24:03.109305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.369 [2024-07-24 20:24:03.109974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.369 [2024-07-24 20:24:03.110043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.369 [2024-07-24 20:24:03.110081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.369 [2024-07-24 20:24:03.110573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.369 [2024-07-24 20:24:03.111018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.369 [2024-07-24 20:24:03.111071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.369 [2024-07-24 20:24:03.111105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.369 [2024-07-24 20:24:03.118088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.369 [2024-07-24 20:24:03.127143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.369 [2024-07-24 20:24:03.127883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.370 [2024-07-24 20:24:03.127964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.370 [2024-07-24 20:24:03.128006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.370 [2024-07-24 20:24:03.128534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.370 [2024-07-24 20:24:03.128967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.370 [2024-07-24 20:24:03.129020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.370 [2024-07-24 20:24:03.129053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.370 [2024-07-24 20:24:03.135765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.370 [2024-07-24 20:24:03.144716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.370 [2024-07-24 20:24:03.145502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.370 [2024-07-24 20:24:03.145540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.370 [2024-07-24 20:24:03.145561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.370 [2024-07-24 20:24:03.146030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.370 [2024-07-24 20:24:03.146419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.370 [2024-07-24 20:24:03.146494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.370 [2024-07-24 20:24:03.146513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.370 [2024-07-24 20:24:03.152325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.629 [2024-07-24 20:24:03.161608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.629 [2024-07-24 20:24:03.162308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.629 [2024-07-24 20:24:03.162377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.629 [2024-07-24 20:24:03.162415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.629 [2024-07-24 20:24:03.162794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.629 [2024-07-24 20:24:03.163346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.629 [2024-07-24 20:24:03.163396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.629 [2024-07-24 20:24:03.163445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.630 [2024-07-24 20:24:03.170394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.630 [2024-07-24 20:24:03.179385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.630 [2024-07-24 20:24:03.180121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.630 [2024-07-24 20:24:03.180194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.630 [2024-07-24 20:24:03.180234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.630 [2024-07-24 20:24:03.180687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.630 [2024-07-24 20:24:03.181011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.630 [2024-07-24 20:24:03.181077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.630 [2024-07-24 20:24:03.181112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.630 [2024-07-24 20:24:03.187095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.630 [2024-07-24 20:24:03.197208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.630 [2024-07-24 20:24:03.197922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.630 [2024-07-24 20:24:03.197994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.630 [2024-07-24 20:24:03.198035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.630 [2024-07-24 20:24:03.198566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.630 [2024-07-24 20:24:03.199017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.630 [2024-07-24 20:24:03.199070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.630 [2024-07-24 20:24:03.199103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.630 [2024-07-24 20:24:03.206031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.630 [2024-07-24 20:24:03.214618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.630 [2024-07-24 20:24:03.215343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.630 [2024-07-24 20:24:03.215412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.630 [2024-07-24 20:24:03.215474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.630 [2024-07-24 20:24:03.216015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.630 [2024-07-24 20:24:03.216562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.630 [2024-07-24 20:24:03.216592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.630 [2024-07-24 20:24:03.216610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.630 [2024-07-24 20:24:03.223624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.630 [2024-07-24 20:24:03.232245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.630 [2024-07-24 20:24:03.232940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.630 [2024-07-24 20:24:03.233010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.630 [2024-07-24 20:24:03.233048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.630 [2024-07-24 20:24:03.233559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.630 [2024-07-24 20:24:03.233980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.630 [2024-07-24 20:24:03.234033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.630 [2024-07-24 20:24:03.234065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.630 [2024-07-24 20:24:03.241013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.630 [2024-07-24 20:24:03.250058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.630 [2024-07-24 20:24:03.250749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.630 [2024-07-24 20:24:03.250818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.630 [2024-07-24 20:24:03.250856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.630 [2024-07-24 20:24:03.251390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.630 [2024-07-24 20:24:03.251821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.630 [2024-07-24 20:24:03.251877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.630 [2024-07-24 20:24:03.251910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.630 [2024-07-24 20:24:03.258861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.630 [2024-07-24 20:24:03.267558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.630 [2024-07-24 20:24:03.268331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.630 [2024-07-24 20:24:03.268399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.630 [2024-07-24 20:24:03.268456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.630 [2024-07-24 20:24:03.268857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.630 [2024-07-24 20:24:03.269401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.630 [2024-07-24 20:24:03.269490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.630 [2024-07-24 20:24:03.269509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.630 [2024-07-24 20:24:03.276501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.630 [2024-07-24 20:24:03.285041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.630 [2024-07-24 20:24:03.285931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.631 [2024-07-24 20:24:03.286029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.631 [2024-07-24 20:24:03.286074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.631 [2024-07-24 20:24:03.286582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.631 [2024-07-24 20:24:03.287025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.631 [2024-07-24 20:24:03.287078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.631 [2024-07-24 20:24:03.287112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.631 [2024-07-24 20:24:03.293642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.631 [2024-07-24 20:24:03.300024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.631 [2024-07-24 20:24:03.300581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.631 [2024-07-24 20:24:03.300623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.631 [2024-07-24 20:24:03.300654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.631 [2024-07-24 20:24:03.300948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.631 [2024-07-24 20:24:03.301245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.631 [2024-07-24 20:24:03.301273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.631 [2024-07-24 20:24:03.301291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.631 [2024-07-24 20:24:03.305691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.631 [2024-07-24 20:24:03.314769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.631 [2024-07-24 20:24:03.315313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.631 [2024-07-24 20:24:03.315352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.631 [2024-07-24 20:24:03.315373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.631 [2024-07-24 20:24:03.315676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.631 [2024-07-24 20:24:03.315974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.631 [2024-07-24 20:24:03.316002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.631 [2024-07-24 20:24:03.316020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.631 [2024-07-24 20:24:03.320407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.631 [2024-07-24 20:24:03.329486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.631 [2024-07-24 20:24:03.330024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.631 [2024-07-24 20:24:03.330063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.631 [2024-07-24 20:24:03.330083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.631 [2024-07-24 20:24:03.330375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.631 [2024-07-24 20:24:03.330680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.631 [2024-07-24 20:24:03.330710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.631 [2024-07-24 20:24:03.330727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.631 [2024-07-24 20:24:03.335116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.631 [2024-07-24 20:24:03.344488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.631 [2024-07-24 20:24:03.345202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.631 [2024-07-24 20:24:03.345273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.631 [2024-07-24 20:24:03.345313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.631 [2024-07-24 20:24:03.345764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.631 [2024-07-24 20:24:03.346283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.631 [2024-07-24 20:24:03.346322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.631 [2024-07-24 20:24:03.346341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.631 [2024-07-24 20:24:03.353552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.631 [2024-07-24 20:24:03.362136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.631 [2024-07-24 20:24:03.362898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.631 [2024-07-24 20:24:03.362967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.631 [2024-07-24 20:24:03.363006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.631 [2024-07-24 20:24:03.363497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.631 [2024-07-24 20:24:03.363916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.631 [2024-07-24 20:24:03.363970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.631 [2024-07-24 20:24:03.364002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.631 [2024-07-24 20:24:03.371672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.631 [2024-07-24 20:24:03.380757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.631 [2024-07-24 20:24:03.381327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.631 [2024-07-24 20:24:03.381365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.631 [2024-07-24 20:24:03.381386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.632 [2024-07-24 20:24:03.381690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.632 [2024-07-24 20:24:03.382227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.632 [2024-07-24 20:24:03.382279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.632 [2024-07-24 20:24:03.382312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.632 [2024-07-24 20:24:03.389393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.632 [2024-07-24 20:24:03.398795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.632 [2024-07-24 20:24:03.399694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.632 [2024-07-24 20:24:03.399756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.632 [2024-07-24 20:24:03.399796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.632 [2024-07-24 20:24:03.400334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.632 [2024-07-24 20:24:03.400748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.632 [2024-07-24 20:24:03.400804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.632 [2024-07-24 20:24:03.400838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.632 [2024-07-24 20:24:03.407865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.891 [2024-07-24 20:24:03.414917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.415555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.415599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.415620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.416106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.416591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.416620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.416638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.422759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.891 [2024-07-24 20:24:03.432693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.433541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.433581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.433603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.434115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.434614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.434646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.434665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.440871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.891 [2024-07-24 20:24:03.450238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.451030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.451104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.451144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.451628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.452120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.452172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.452205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.459212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.891 [2024-07-24 20:24:03.467817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.468608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.468646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.468667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.469199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.469660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.469691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.469710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.476675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.891 [2024-07-24 20:24:03.485419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.486100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.486169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.486207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.486663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.487155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.487207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.487240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.494237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.891 [2024-07-24 20:24:03.502945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.503829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.503899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.503939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.504507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.504928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.504980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.505013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.512171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.891 [2024-07-24 20:24:03.521941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.522751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.522822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.522860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.523396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.523837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.523890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.523936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.530326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.891 [2024-07-24 20:24:03.539654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.540504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.540543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.540564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.540997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.541541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.541571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.541589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.548724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.891 [2024-07-24 20:24:03.557490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.558229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.558298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.558336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.558768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.559315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.559367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.559401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.566508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.891 [2024-07-24 20:24:03.576395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.577273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.577342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.577380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.577773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.578321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.578372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.578404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.585532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.891 [2024-07-24 20:24:03.594204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.594921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.594991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.595030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.595550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.595959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.596011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.596044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.603019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.891 [2024-07-24 20:24:03.611681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 [2024-07-24 20:24:03.612515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.612552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.612573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.891 [2024-07-24 20:24:03.613039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.891 [2024-07-24 20:24:03.613577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.891 [2024-07-24 20:24:03.613606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.891 [2024-07-24 20:24:03.613624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.891 [2024-07-24 20:24:03.619462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2168091 Killed "${NVMF_APP[@]}" "$@" 00:29:59.891 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:59.891 [2024-07-24 20:24:03.628976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.891 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:59.891 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:59.891 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.891 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:59.891 [2024-07-24 20:24:03.629554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.891 [2024-07-24 20:24:03.629592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.891 [2024-07-24 20:24:03.629614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.892 [2024-07-24 20:24:03.629932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.892 [2024-07-24 20:24:03.630243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.892 [2024-07-24 20:24:03.630291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.892 [2024-07-24 20:24:03.630326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2169287 00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2169287 00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2169287 ']' 00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:59.892 20:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:59.892 [2024-07-24 20:24:03.637348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.892 [2024-07-24 20:24:03.646797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.892 [2024-07-24 20:24:03.647600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.892 [2024-07-24 20:24:03.647638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.892 [2024-07-24 20:24:03.647659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.892 [2024-07-24 20:24:03.648117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.892 [2024-07-24 20:24:03.648598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.892 [2024-07-24 20:24:03.648628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.892 [2024-07-24 20:24:03.648648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.892 [2024-07-24 20:24:03.655694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
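At this point the harness has killed the previous nvmf_tgt (pid 2168091, bdevperf.sh line 35) and tgt_init/nvmfappstart relaunch it inside the cvl_0_0_ns_spdk network namespace with -i 0 (shared-memory id), -m 0xE (core mask) and -e 0xFFFF (most likely the tracepoint group mask), then wait for it to listen on /var/tmp/spdk.sock. A small sketch, with a hypothetical helper name, of how a hex core mask like 0xE decodes; the result matches the later "Total cores available: 3" notice from app.c:

def cores_from_mask(mask: int) -> list[int]:
    # Each set bit selects one CPU core: 0xE == 0b1110 -> cores 1, 2 and 3.
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask(0xE))  # [1, 2, 3]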
00:29:59.892 [2024-07-24 20:24:03.664479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.892 [2024-07-24 20:24:03.665132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.892 [2024-07-24 20:24:03.665201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:29:59.892 [2024-07-24 20:24:03.665239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:29:59.892 [2024-07-24 20:24:03.665664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:29:59.892 [2024-07-24 20:24:03.666167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.892 [2024-07-24 20:24:03.666218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.892 [2024-07-24 20:24:03.666250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.892 [2024-07-24 20:24:03.672505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.151 [2024-07-24 20:24:03.680253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.151 [2024-07-24 20:24:03.680885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.151 [2024-07-24 20:24:03.680954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.151 [2024-07-24 20:24:03.680992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.151 [2024-07-24 20:24:03.681531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.151 [2024-07-24 20:24:03.681844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.151 [2024-07-24 20:24:03.681896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.151 [2024-07-24 20:24:03.681929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.151 [2024-07-24 20:24:03.688924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.151 [2024-07-24 20:24:03.696498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.151 [2024-07-24 20:24:03.697177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.151 [2024-07-24 20:24:03.697250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.151 [2024-07-24 20:24:03.697289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.151 [2024-07-24 20:24:03.697727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.151 [2024-07-24 20:24:03.698278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.151 [2024-07-24 20:24:03.698331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.151 [2024-07-24 20:24:03.698364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.151 [2024-07-24 20:24:03.701149] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:30:00.151 [2024-07-24 20:24:03.701251] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.151 [2024-07-24 20:24:03.705511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.151 [2024-07-24 20:24:03.714549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.151 [2024-07-24 20:24:03.715228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.151 [2024-07-24 20:24:03.715298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.151 [2024-07-24 20:24:03.715337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.151 [2024-07-24 20:24:03.715722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.151 [2024-07-24 20:24:03.716285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.151 [2024-07-24 20:24:03.716337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.151 [2024-07-24 20:24:03.716372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.151 [2024-07-24 20:24:03.723397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.151 [2024-07-24 20:24:03.732083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.151 [2024-07-24 20:24:03.732792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.151 [2024-07-24 20:24:03.732862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.151 [2024-07-24 20:24:03.732901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.151 [2024-07-24 20:24:03.733458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.151 [2024-07-24 20:24:03.733884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.151 [2024-07-24 20:24:03.733938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.151 [2024-07-24 20:24:03.733971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.151 [2024-07-24 20:24:03.740931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.151 [2024-07-24 20:24:03.749557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.151 [2024-07-24 20:24:03.750228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.151 [2024-07-24 20:24:03.750297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.151 [2024-07-24 20:24:03.750335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.151 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.151 [2024-07-24 20:24:03.750711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.151 [2024-07-24 20:24:03.751244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.151 [2024-07-24 20:24:03.751297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.151 [2024-07-24 20:24:03.751330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.151 [2024-07-24 20:24:03.757126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
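The "EAL: No free 2048 kB hugepages reported on node 1" record interleaved above comes from DPDK initialization of the relaunched target: NUMA node 1 had no free 2 MB hugepages at startup, which is harmless as long as node 0 can satisfy the allocations. A small sketch, assuming a Linux host with the standard sysfs layout, for inspecting the per-node pools that message refers to:

from pathlib import Path

# Walk the per-NUMA-node hugepage directories; which node numbers exist
# depends on the machine.
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    hp = node / "hugepages" / "hugepages-2048kB"
    if hp.is_dir():
        free = (hp / "free_hugepages").read_text().strip()
        total = (hp / "nr_hugepages").read_text().strip()
        print(f"{node.name}: {free}/{total} free 2048 kB hugepages")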
00:30:00.151 [2024-07-24 20:24:03.764382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.151 [2024-07-24 20:24:03.764889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.151 [2024-07-24 20:24:03.764927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.151 [2024-07-24 20:24:03.764949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.151 [2024-07-24 20:24:03.765240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.151 [2024-07-24 20:24:03.765549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.151 [2024-07-24 20:24:03.765578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.151 [2024-07-24 20:24:03.765597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.151 [2024-07-24 20:24:03.769981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.151 [2024-07-24 20:24:03.779336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.151 [2024-07-24 20:24:03.779850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.151 [2024-07-24 20:24:03.779887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.151 [2024-07-24 20:24:03.779908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.151 [2024-07-24 20:24:03.780200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.151 [2024-07-24 20:24:03.780511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.151 [2024-07-24 20:24:03.780540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.152 [2024-07-24 20:24:03.780566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.152 [2024-07-24 20:24:03.784961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.152 [2024-07-24 20:24:03.794291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.152 [2024-07-24 20:24:03.794810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.152 [2024-07-24 20:24:03.794847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.152 [2024-07-24 20:24:03.794868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.152 [2024-07-24 20:24:03.795160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.152 [2024-07-24 20:24:03.795469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.152 [2024-07-24 20:24:03.795498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.152 [2024-07-24 20:24:03.795517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.152 [2024-07-24 20:24:03.795761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:00.152 [2024-07-24 20:24:03.799912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.152 [2024-07-24 20:24:03.809034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.152 [2024-07-24 20:24:03.809670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.152 [2024-07-24 20:24:03.809720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.152 [2024-07-24 20:24:03.809746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.152 [2024-07-24 20:24:03.810049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.152 [2024-07-24 20:24:03.810353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.152 [2024-07-24 20:24:03.810381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.152 [2024-07-24 20:24:03.810404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.152 [2024-07-24 20:24:03.814805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.152 [2024-07-24 20:24:03.823887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.152 [2024-07-24 20:24:03.824411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.152 [2024-07-24 20:24:03.824458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.152 [2024-07-24 20:24:03.824481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.152 [2024-07-24 20:24:03.824773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.152 [2024-07-24 20:24:03.825071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.152 [2024-07-24 20:24:03.825099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.152 [2024-07-24 20:24:03.825117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.152 [2024-07-24 20:24:03.829510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.152 [2024-07-24 20:24:03.838830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.152 [2024-07-24 20:24:03.839281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.152 [2024-07-24 20:24:03.839319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.152 [2024-07-24 20:24:03.839340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.152 [2024-07-24 20:24:03.839644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.152 [2024-07-24 20:24:03.839941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.152 [2024-07-24 20:24:03.839969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.152 [2024-07-24 20:24:03.839987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.152 [2024-07-24 20:24:03.844369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.152 [2024-07-24 20:24:03.853701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.152 [2024-07-24 20:24:03.854204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.152 [2024-07-24 20:24:03.854242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.152 [2024-07-24 20:24:03.854264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.152 [2024-07-24 20:24:03.854567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.152 [2024-07-24 20:24:03.854865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.152 [2024-07-24 20:24:03.854893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.152 [2024-07-24 20:24:03.854913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.152 [2024-07-24 20:24:03.859294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.152 [2024-07-24 20:24:03.868632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.152 [2024-07-24 20:24:03.869146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.152 [2024-07-24 20:24:03.869188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.152 [2024-07-24 20:24:03.869211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.152 [2024-07-24 20:24:03.869518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.152 [2024-07-24 20:24:03.869819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.152 [2024-07-24 20:24:03.869847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.152 [2024-07-24 20:24:03.869867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.152 [2024-07-24 20:24:03.874262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.152 [2024-07-24 20:24:03.883415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.152 [2024-07-24 20:24:03.884008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.152 [2024-07-24 20:24:03.884056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.152 [2024-07-24 20:24:03.884082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.152 [2024-07-24 20:24:03.884392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.152 [2024-07-24 20:24:03.884715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.152 [2024-07-24 20:24:03.884745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.152 [2024-07-24 20:24:03.884766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.152 [2024-07-24 20:24:03.889153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.152 [2024-07-24 20:24:03.898219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.152 [2024-07-24 20:24:03.898678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.152 [2024-07-24 20:24:03.898716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.152 [2024-07-24 20:24:03.898738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.152 [2024-07-24 20:24:03.899030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.152 [2024-07-24 20:24:03.899327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.153 [2024-07-24 20:24:03.899355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.153 [2024-07-24 20:24:03.899373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.153 [2024-07-24 20:24:03.903768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.153 [2024-07-24 20:24:03.913093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.153 [2024-07-24 20:24:03.913613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.153 [2024-07-24 20:24:03.913651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.153 [2024-07-24 20:24:03.913671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.153 [2024-07-24 20:24:03.913967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.153 [2024-07-24 20:24:03.914264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.153 [2024-07-24 20:24:03.914292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.153 [2024-07-24 20:24:03.914310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.153 [2024-07-24 20:24:03.918705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.153 [2024-07-24 20:24:03.928032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.153 [2024-07-24 20:24:03.928497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.153 [2024-07-24 20:24:03.928535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.153 [2024-07-24 20:24:03.928556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.153 [2024-07-24 20:24:03.928847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.153 [2024-07-24 20:24:03.929144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.153 [2024-07-24 20:24:03.929172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.153 [2024-07-24 20:24:03.929190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.153 [2024-07-24 20:24:03.933600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.413 [2024-07-24 20:24:03.935343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.413 [2024-07-24 20:24:03.935386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.413 [2024-07-24 20:24:03.935405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.413 [2024-07-24 20:24:03.935421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.413 [2024-07-24 20:24:03.935444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
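The app_setup_trace notices above spell out how to pull the trace this run recorded. A short sketch built only from what the notices state, plus an assumed -f flag for decoding a copied file offline:

    spdk_trace -s nvmf -i 0          # snapshot of live events, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/   # keep the shared-memory ring for offline analysis
    spdk_trace -f /tmp/nvmf_trace.0  # assumption: -f points the tool at a saved trace file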
00:30:00.413 [2024-07-24 20:24:03.935747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.413 [2024-07-24 20:24:03.935825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.413 [2024-07-24 20:24:03.935830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.413 [2024-07-24 20:24:03.942995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.413 [2024-07-24 20:24:03.943616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.413 [2024-07-24 20:24:03.943661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.413 [2024-07-24 20:24:03.943685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.413 [2024-07-24 20:24:03.944003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.413 [2024-07-24 20:24:03.944304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.413 [2024-07-24 20:24:03.944333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.413 [2024-07-24 20:24:03.944354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.413 [2024-07-24 20:24:03.948816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.413 [2024-07-24 20:24:03.957978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.413 [2024-07-24 20:24:03.958580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.413 [2024-07-24 20:24:03.958630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.413 [2024-07-24 20:24:03.958657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.413 [2024-07-24 20:24:03.958959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.413 [2024-07-24 20:24:03.959262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.413 [2024-07-24 20:24:03.959292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.413 [2024-07-24 20:24:03.959313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.413 [2024-07-24 20:24:03.963721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
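Reactors started on cores 1, 2 and 3, consistent with the earlier "Total cores available: 3" notice. The harness's actual command line is not shown in this log; a hypothetical invocation that would produce this layout (the -i shm id and -e 0xFFFF tracepoint mask do appear in NVMF_APP later in this log, while -m 0xE is purely an assumption):

    # cores 1-3 => mask 0b1110 = 0xE
    ./build/bin/nvmf_tgt -m 0xE -i "$NVMF_APP_SHM_ID" -e 0xFFFF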
00:30:00.413 [2024-07-24 20:24:03.972819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.413 [2024-07-24 20:24:03.973435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.413 [2024-07-24 20:24:03.973488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.413 [2024-07-24 20:24:03.973514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.413 [2024-07-24 20:24:03.973827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.413 [2024-07-24 20:24:03.974131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.413 [2024-07-24 20:24:03.974160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.413 [2024-07-24 20:24:03.974182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.413 [2024-07-24 20:24:03.978611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.413 [2024-07-24 20:24:03.987741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.413 [2024-07-24 20:24:03.988331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.413 [2024-07-24 20:24:03.988383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.413 [2024-07-24 20:24:03.988408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.413 [2024-07-24 20:24:03.988720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.413 [2024-07-24 20:24:03.989024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.413 [2024-07-24 20:24:03.989053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.413 [2024-07-24 20:24:03.989075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.413 [2024-07-24 20:24:03.993477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.413 [2024-07-24 20:24:04.002567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.413 [2024-07-24 20:24:04.003121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.413 [2024-07-24 20:24:04.003164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.413 [2024-07-24 20:24:04.003187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.413 [2024-07-24 20:24:04.003542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.413 [2024-07-24 20:24:04.003845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.413 [2024-07-24 20:24:04.003874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.413 [2024-07-24 20:24:04.003895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.413 [2024-07-24 20:24:04.008286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.413 [2024-07-24 20:24:04.017372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.413 [2024-07-24 20:24:04.017984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.413 [2024-07-24 20:24:04.018035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.413 [2024-07-24 20:24:04.018061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.413 [2024-07-24 20:24:04.018362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.413 [2024-07-24 20:24:04.018675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.413 [2024-07-24 20:24:04.018705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.413 [2024-07-24 20:24:04.018728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.413 [2024-07-24 20:24:04.023143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.413 [2024-07-24 20:24:04.032234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.413 [2024-07-24 20:24:04.032800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.413 [2024-07-24 20:24:04.032848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.413 [2024-07-24 20:24:04.032872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.413 [2024-07-24 20:24:04.033171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.413 [2024-07-24 20:24:04.033482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.413 [2024-07-24 20:24:04.033512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.413 [2024-07-24 20:24:04.033531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.413 [2024-07-24 20:24:04.037921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.413 [2024-07-24 20:24:04.046992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.413 [2024-07-24 20:24:04.047461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.413 [2024-07-24 20:24:04.047499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.413 [2024-07-24 20:24:04.047521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.413 [2024-07-24 20:24:04.047814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.413 [2024-07-24 20:24:04.048111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.413 [2024-07-24 20:24:04.048139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.413 [2024-07-24 20:24:04.048158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.414 [2024-07-24 20:24:04.052547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.414 [2024-07-24 20:24:04.061870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.414 [2024-07-24 20:24:04.062377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.414 [2024-07-24 20:24:04.062414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.414 [2024-07-24 20:24:04.062445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.414 [2024-07-24 20:24:04.062740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.414 [2024-07-24 20:24:04.063037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.414 [2024-07-24 20:24:04.063066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.414 [2024-07-24 20:24:04.063084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.414 [2024-07-24 20:24:04.067474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.414 [2024-07-24 20:24:04.076576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.414 [2024-07-24 20:24:04.077091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.414 [2024-07-24 20:24:04.077128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.414 [2024-07-24 20:24:04.077150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.414 [2024-07-24 20:24:04.077452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.414 [2024-07-24 20:24:04.077752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.414 [2024-07-24 20:24:04.077781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.414 [2024-07-24 20:24:04.077800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.414 [2024-07-24 20:24:04.082216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.414 [2024-07-24 20:24:04.091306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.414 [2024-07-24 20:24:04.091756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.414 [2024-07-24 20:24:04.091794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.414 [2024-07-24 20:24:04.091815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.414 [2024-07-24 20:24:04.092107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.414 [2024-07-24 20:24:04.092405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.414 [2024-07-24 20:24:04.092443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.414 [2024-07-24 20:24:04.092463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.414 [2024-07-24 20:24:04.096027] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.414 [2024-07-24 20:24:04.096851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.414 [2024-07-24 20:24:04.109312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.414 [2024-07-24 20:24:04.109982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.414 [2024-07-24 20:24:04.110052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.414 [2024-07-24 20:24:04.110090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.414 [2024-07-24 20:24:04.110582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.414 [2024-07-24 20:24:04.111059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.414 [2024-07-24 20:24:04.111111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.414 [2024-07-24 20:24:04.111160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.414 [2024-07-24 20:24:04.115710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.414 [2024-07-24 20:24:04.124267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.414 [2024-07-24 20:24:04.124815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.414 [2024-07-24 20:24:04.124858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.414 [2024-07-24 20:24:04.124882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.414 [2024-07-24 20:24:04.125179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.414 [2024-07-24 20:24:04.125489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.414 [2024-07-24 20:24:04.125519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.414 [2024-07-24 20:24:04.125539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.414 [2024-07-24 20:24:04.129934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.414 [2024-07-24 20:24:04.139025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.414 [2024-07-24 20:24:04.139606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.414 [2024-07-24 20:24:04.139655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.414 [2024-07-24 20:24:04.139680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.414 [2024-07-24 20:24:04.139982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.414 [2024-07-24 20:24:04.140285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.414 [2024-07-24 20:24:04.140314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.414 [2024-07-24 20:24:04.140336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:00.414 Malloc0 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.414 [2024-07-24 20:24:04.144730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.414 [2024-07-24 20:24:04.153805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.414 [2024-07-24 20:24:04.154322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.414 [2024-07-24 20:24:04.154360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6540 with addr=10.0.0.2, port=4420 00:30:00.414 [2024-07-24 20:24:04.154381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6540 is same with the state(5) to be set 00:30:00.414 [2024-07-24 20:24:04.154684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6540 (9): Bad file descriptor 00:30:00.414 [2024-07-24 20:24:04.154981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.414 [2024-07-24 20:24:04.155009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.414 [2024-07-24 20:24:04.155027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.414 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.415 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.415 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.415 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.415 [2024-07-24 20:24:04.159404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.415 [2024-07-24 20:24:04.161205] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.415 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.415 20:24:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2168507 00:30:00.415 [2024-07-24 20:24:04.168743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.673 [2024-07-24 20:24:04.212974] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
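With the retry noise filtered out, host/bdevperf.sh has just assembled the target side over RPC; collected in order (rpc_cmd in the autotest harness is a thin wrapper around scripts/rpc.py), the calls were:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB in-capsule data
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener notice lands ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the very next reset attempt connects and the log flips to "Resetting controller successful".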
00:30:10.656 00:30:10.656 Latency(us) 00:30:10.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:10.656 Verification LBA range: start 0x0 length 0x4000 00:30:10.656 Nvme1n1 : 15.01 4994.49 19.51 5224.68 0.00 12487.77 776.72 30680.56 00:30:10.656 =================================================================================================================== 00:30:10.656 Total : 4994.49 19.51 5224.68 0.00 12487.77 776.72 30680.56 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.656 rmmod nvme_tcp 00:30:10.656 rmmod nvme_fabrics 00:30:10.656 rmmod nvme_keyring 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2169287 ']' 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2169287 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2169287 ']' 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2169287 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2169287 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2169287' 00:30:10.656 killing process with pid 2169287 00:30:10.656 20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2169287 00:30:10.656 
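A quick consistency check on the table above: the MiB/s column is simply IOPS times the 4096-byte IO size from the job description, and the large Fail/s figure is in line with the connection resets this test drives:

    awk 'BEGIN { print 4994.49 * 4096 / (1024 * 1024) }'   # -> 19.5097, matching the 19.51 MiB/s column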
20:24:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2169287 00:30:10.656 20:24:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:10.657 20:24:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:10.657 20:24:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:10.657 20:24:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:10.657 20:24:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:10.657 20:24:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.657 20:24:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.657 20:24:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:12.560 00:30:12.560 real 0m24.538s 00:30:12.560 user 1m4.265s 00:30:12.560 sys 0m5.225s 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.560 ************************************ 00:30:12.560 END TEST nvmf_bdevperf 00:30:12.560 ************************************ 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.560 ************************************ 00:30:12.560 START TEST nvmf_target_disconnect 00:30:12.560 ************************************ 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:12.560 * Looking for test storage... 
00:30:12.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.560 
20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.560 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:12.561 20:24:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:15.094 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.094 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:15.094 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:15.094 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:15.094 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:15.094 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:15.094 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.095 
20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:15.095 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:15.095 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:15.095 Found net devices under 0000:84:00.0: cvl_0_0 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:15.095 Found net devices under 0000:84:00.1: cvl_0_1 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.095 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:15.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:30:15.356 00:30:15.356 --- 10.0.0.2 ping statistics --- 00:30:15.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.356 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:15.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:30:15.356 00:30:15.356 --- 10.0.0.1 ping statistics --- 00:30:15.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.356 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.356 20:24:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:15.356 ************************************ 00:30:15.356 START TEST nvmf_target_disconnect_tc1 00:30:15.356 ************************************ 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:15.356 20:24:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:15.356 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.356 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.615 [2024-07-24 20:24:19.146080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.615 [2024-07-24 20:24:19.146247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1005790 with addr=10.0.0.2, port=4420 00:30:15.615 [2024-07-24 20:24:19.146329] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:15.615 [2024-07-24 20:24:19.146390] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:15.615 [2024-07-24 20:24:19.146424] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:15.615 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:15.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:15.615 Initializing NVMe Controllers 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:15.615 00:30:15.615 real 0m0.149s 00:30:15.615 user 0m0.065s 00:30:15.615 sys 0m0.080s 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:15.615 ************************************ 00:30:15.615 END TEST nvmf_target_disconnect_tc1 00:30:15.615 ************************************ 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:15.615 20:24:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:15.615 ************************************ 00:30:15.615 START TEST nvmf_target_disconnect_tc2 00:30:15.615 ************************************ 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.615 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2172960 00:30:15.616 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:15.616 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2172960 00:30:15.616 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2172960 ']' 00:30:15.616 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.616 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:15.616 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.616 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:15.616 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.616 [2024-07-24 20:24:19.359292] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:30:15.616 [2024-07-24 20:24:19.359489] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.875 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.875 [2024-07-24 20:24:19.513531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.133 [2024-07-24 20:24:19.730598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:16.133 [2024-07-24 20:24:19.730708] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.133 [2024-07-24 20:24:19.730745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.133 [2024-07-24 20:24:19.730774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.133 [2024-07-24 20:24:19.730800] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.133 [2024-07-24 20:24:19.730980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:16.133 [2024-07-24 20:24:19.731063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:16.133 [2024-07-24 20:24:19.731119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:16.133 [2024-07-24 20:24:19.731124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.133 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.392 Malloc0 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.392 [2024-07-24 20:24:19.939961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.392 [2024-07-24 20:24:19.968277] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2173106 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.392 20:24:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:16.392 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.299 20:24:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2172960 00:30:18.299 20:24:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting 
I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.299 Read completed with error (sct=0, sc=8) 00:30:18.299 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 [2024-07-24 20:24:21.996624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 
00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 [2024-07-24 20:24:21.997347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read 
completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 [2024-07-24 20:24:21.997831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Write completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.300 Read completed with error (sct=0, sc=8) 00:30:18.300 starting I/O failed 00:30:18.301 Read completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Write completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Write completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Read completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Read completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Read completed with 
error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Write completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Read completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Read completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Read completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Write completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 Read completed with error (sct=0, sc=8) 00:30:18.301 starting I/O failed 00:30:18.301 [2024-07-24 20:24:21.998406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.301 [2024-07-24 20:24:21.998771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:21.998870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:21.999216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:21.999283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:21.999591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:21.999626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:21.999844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:21.999907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.000196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.000260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.000578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.000612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.000818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.000880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.001148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.001212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 
00:30:18.301 [2024-07-24 20:24:22.001532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.001566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.001728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.001790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.002104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.002167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.002499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.002534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.002774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.002837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.003151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.003214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.003500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.003533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.003730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.003794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.004043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.004106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.004409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.004494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 
00:30:18.301 [2024-07-24 20:24:22.004709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.004772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.005065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.005128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.005448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.005511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.005736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.005807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.006069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.006133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.006465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.006516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.006727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.006791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.007099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.007162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.007495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.007530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.007759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.007822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 
00:30:18.301 [2024-07-24 20:24:22.008132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.008195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.008505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.008540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.008770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.008834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.301 [2024-07-24 20:24:22.009121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.301 [2024-07-24 20:24:22.009184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.301 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.009510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.009545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.009770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.009835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.010072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.010135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.010452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.010508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.010734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.010797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.011090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.011172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 
00:30:18.302 [2024-07-24 20:24:22.011497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.011533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.011710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.011772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.012080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.012144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.012478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.012514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.012728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.012791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.013050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.013114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.013379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.013483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.013732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.013795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.014110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.014173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 00:30:18.302 [2024-07-24 20:24:22.014389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.302 [2024-07-24 20:24:22.014423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.302 qpair failed and we were unable to recover it. 
00:30:18.302 [2024-07-24 20:24:22.014597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.302 [2024-07-24 20:24:22.014661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:18.302 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair recovery failure records for tqpair=0x7fe94c000b90 (10.0.0.2, port 4420) repeat continuously through 2024-07-24 20:24:22.083 ...]
00:30:18.583 [2024-07-24 20:24:22.083110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.583 [2024-07-24 20:24:22.083159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:18.583 qpair failed and we were unable to recover it.
00:30:18.583 [2024-07-24 20:24:22.083340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.083373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.083667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.083702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.084034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.084068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.084402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.084483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.084798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.084861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.085175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.085209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.085508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.085573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.085871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.085935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.086240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.086275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.086584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.086648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 
00:30:18.583 [2024-07-24 20:24:22.086955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.087019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.087296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.087331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.087545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.087610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.087881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.087944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.088235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.088269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.088494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.088559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.088792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.088856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.089157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.089192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.089522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.089588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.089872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.089945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 
00:30:18.583 [2024-07-24 20:24:22.090251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.090286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.090589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.090654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.090982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.091045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.091312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.091346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.091566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.091630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.091899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.091963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.092253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.092287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.092490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.092525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.092697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.092770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.093053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.093088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 
00:30:18.583 [2024-07-24 20:24:22.093297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.093360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.093603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.093638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 20:24:22.093842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 20:24:22.093877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.094080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.094143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.094414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.094496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.094807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.094841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.095135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.095198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.095445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.095510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.095752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.095785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.095975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.096038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 20:24:22.096342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.096375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.096612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.096646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.096837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.096900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.097161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.097224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.097521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.097557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.097857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.097890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.098218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.098281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.098579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.098615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.098931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.098994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.099360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.099423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 20:24:22.099714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.099749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.099954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.100017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.100329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.100393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.100705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.100740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.101017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.101079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.101377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.101457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.101780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.101835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.102139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.102201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.102486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.102521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.102731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.102771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 20:24:22.103080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.103144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.103415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.103493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.103732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.103767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.103962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.104025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.104360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.104423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.104681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.104715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.104894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.104957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.105292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.105356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.105674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.105709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.105953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.106016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 20:24:22.106299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.106364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.106702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.106771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.107043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.107107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.107383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.107465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.107781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.107815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.108133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.108196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.108445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.108510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.108819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.108854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.109184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.109248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.109548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.109614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 20:24:22.109905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.109940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.110151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.110214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.110504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.110569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.110863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.110898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.111214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.111276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.111547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.111583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.111824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.111877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.112185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.112247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.112552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.112617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.112912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.112946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 20:24:22.113213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.113276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.113575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.113640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.113883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.113917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.114080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.114144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.114376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.114462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.114768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.114802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.115058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.115121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.115394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.115476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.115797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.115831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.116089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.116162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 20:24:22.116397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.116474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.116775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.116810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.117119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.117182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.117484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 20:24:22.117549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 20:24:22.117852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.117887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.118189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.118252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.118519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.118584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.118881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.118915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.119206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.119269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.119576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.119640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 
00:30:18.585 [2024-07-24 20:24:22.119988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.120062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.120361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.120425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.120712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.120775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.121095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.121130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.121456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.121521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.121807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.121870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.122182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.122217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.122454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.122519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.122771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.122834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.123093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.123127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 
00:30:18.585 [2024-07-24 20:24:22.123315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.123379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.123646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.123681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.123890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.123925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.124138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.124200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.124463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.124528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.124758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.124793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.124997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.125060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.125297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.125360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.125633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.125668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 20:24:22.125812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 20:24:22.125875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 
00:30:18.585 [2024-07-24 20:24:22.126115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.585 [2024-07-24 20:24:22.126180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:18.585 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously with only the timestamps advancing: connect() failed, errno = 111, then a sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:30:18.588 [2024-07-24 20:24:22.187903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.588 [2024-07-24 20:24:22.187967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:18.588 qpair failed and we were unable to recover it.
00:30:18.588 [2024-07-24 20:24:22.188251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.188315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.188576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.188611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.188809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.188871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.189121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.189185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.189489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.189525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.189761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.189826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.190097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.190161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.190371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.190405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.190634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.190698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.190971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.191034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 
00:30:18.588 [2024-07-24 20:24:22.191259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.191293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.191506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.191572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.191841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.191905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.192202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.192237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.192423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.192507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.192728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.192792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.193065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.193099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.193265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.193328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.193625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.193660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.193884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.193918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 
00:30:18.588 [2024-07-24 20:24:22.194153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.194217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.194477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.194542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.194808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.194843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.195093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.195157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.195447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.195512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.195797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.195832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.196085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.196157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.196450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.196515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.196787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.196822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.197020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.197084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 
00:30:18.588 [2024-07-24 20:24:22.197363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 20:24:22.197446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 20:24:22.197721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.197755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.197955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.198018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.198287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.198350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.198618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.198654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.198840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.198904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.199147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.199210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.199475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.199511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.199756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.199819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.200063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.200125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 20:24:22.200406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.200449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.200665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.200728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.200993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.201057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.201344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.201407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.201665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.201729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.202005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.202069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.202345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.202408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.202707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.202771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.203032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.203094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.203344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.203408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 20:24:22.203672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.203741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.204008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.204071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.204340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.204374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.204617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.204682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.204941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.205004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.205241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.205276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.205509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.205575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.205829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.205892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.206152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.206186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.206372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.206449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 20:24:22.206714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.206778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.207025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.207060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.207269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.207331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.207612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.207648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.207837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.207871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.208127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.208190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.208453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.208528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.208800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.208835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.209041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.209103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.209335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.209399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 20:24:22.209686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.209722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.209932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.209995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.210247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.210310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.210583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.210619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.210826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.210888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.211160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.211224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.211492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.211527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.211798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.211861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.212101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.212164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.212444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.212479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 20:24:22.212736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.212799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.213040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.213103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.213304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.213339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.213583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.213647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.213939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.214002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.214270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.214305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.214527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.214591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.214866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.214929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.215193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.215227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.215479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.215514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 20:24:22.215738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.215801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.216064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.216099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.216329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.216391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.216692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.216756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.217023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.217057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.217295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.217357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.217636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.217671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.217874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.217908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.218132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.218195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.218387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.218465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 20:24:22.218746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.218781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.218995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 20:24:22.219058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 20:24:22.219320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.219384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.219671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.219706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.219936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.219999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.220279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.220343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.220615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.220656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.220887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.220951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.221229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.221292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.221567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.221602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 
00:30:18.590 [2024-07-24 20:24:22.221842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.221906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.222199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.222262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.222538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.222574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.222792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.222854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.223112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.223175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.223413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.223459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.223673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.223736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.223995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.224058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.224321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.224356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.224587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.224652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 
00:30:18.590 [2024-07-24 20:24:22.224934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.224998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.225248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.225283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.225461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.225518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.225729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.225792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.226054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.226088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.226246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.226310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.226578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.226643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.226871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.226906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.227136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.227199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 20:24:22.227483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 20:24:22.227547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 
00:30:18.590 [2024-07-24 20:24:22.227785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.590 [2024-07-24 20:24:22.227820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:18.590 qpair failed and we were unable to recover it.
00:30:18.590 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every retry from 20:24:22.228036 through 20:24:22.250088, all for tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 ...]
00:30:18.592 [2024-07-24 20:24:22.250343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.592 [2024-07-24 20:24:22.250417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:18.592 qpair failed and we were unable to recover it.
00:30:18.592 [... the same triplet repeats from 20:24:22.250656 through 20:24:22.251462 for tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 ...]
00:30:18.592 [2024-07-24 20:24:22.251676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.592 [2024-07-24 20:24:22.251735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:18.592 qpair failed and we were unable to recover it.
00:30:18.593 [... the same triplet repeats for every retry from 20:24:22.251944 through 20:24:22.285690, all for tqpair=0x578ea0 with addr=10.0.0.2, port=4420 ...]
00:30:18.596 [2024-07-24 20:24:22.285854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.596 [2024-07-24 20:24:22.285898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:18.596 qpair failed and we were unable to recover it.
00:30:18.596 [2024-07-24 20:24:22.286134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.286210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.286503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.286539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.286721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.286757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.286980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.287043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.287310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.287375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.287650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.287686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.287860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.287896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.288103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.288166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.288439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.288476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.288686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.288721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 
00:30:18.596 [2024-07-24 20:24:22.288910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.288946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.289209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.289244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.289482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.289519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.289693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.289770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.290039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.290073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.290300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.290365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.290632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.290667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.290977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.291012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.291295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.291360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.291651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.291687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 
00:30:18.596 [2024-07-24 20:24:22.291885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.291921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.292145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.292208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.292450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.292506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.292735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.292771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.292952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.292987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.293226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.293288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.293557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.293594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.293781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.293818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.294008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.294082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 20:24:22.294346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.294384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 
00:30:18.596 [2024-07-24 20:24:22.294580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 20:24:22.294616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.294819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.294855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.295074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.295117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.295393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.295499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.295719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.295763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.295945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.295980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.296211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.296302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.296615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.296651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.296859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.296895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.297087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.297149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 
00:30:18.597 [2024-07-24 20:24:22.297420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.297506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.297726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.297761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.297994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.298058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.298321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.298384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.298645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.298681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.298860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.298894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.299117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.299182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.299450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.299493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.299701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.299737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.299928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.299964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 
00:30:18.597 [2024-07-24 20:24:22.300179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.300214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.300491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.300527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.300708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.300743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.300946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.300981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.301155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.301218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.301502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.301552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.301781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.301818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.302047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.302111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.302374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.302466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.302718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.302754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 
00:30:18.597 [2024-07-24 20:24:22.302978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.303043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.303316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.303378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.303643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.303680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.303856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.303891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.304098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.304133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.304318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.304354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.304522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.304558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.304738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.304776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 20:24:22.304984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 20:24:22.305019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.305288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.305355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 
00:30:18.598 [2024-07-24 20:24:22.305608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.305644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.305848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.305884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.306125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.306188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.306459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.306515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.306712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.306747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.306909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.306944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.307157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.307220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.307458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.307500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.307678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.307713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.307916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.307951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 
00:30:18.598 [2024-07-24 20:24:22.308136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.308180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.308413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.308461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.308622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.308658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.308900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.308935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.309106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.309185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.309483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.309518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.309656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.309698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.309930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.309994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.310229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.310295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.310567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.310603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 
00:30:18.598 [2024-07-24 20:24:22.310785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.310828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.310982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.311055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.311319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.311360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.311563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.311599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.311809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.311875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.312143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.312176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.312443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.312506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.312723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.312757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.312969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.313004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.313270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.313339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 
00:30:18.598 [2024-07-24 20:24:22.313596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.313632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.313802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.313845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.314065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.314129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.314387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.314479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.314708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.314749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.314954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.314990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.315173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 20:24:22.315236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 20:24:22.315500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.315537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.315740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.315775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.315954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.315989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 
00:30:18.599 [2024-07-24 20:24:22.316166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.316200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.316395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.316446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.316655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.316699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.316909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.316944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.317202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.317277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.317519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.317554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.317756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.317794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.318023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.318086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.318323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.318400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.318671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.318707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 
00:30:18.599 [2024-07-24 20:24:22.318890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.318925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.319116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.319179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.319457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.319494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.319691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.319725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.319947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.319983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.320223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.320259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.320449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.320484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.320699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.320734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.320916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.320950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.321175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.321240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 
00:30:18.599 [2024-07-24 20:24:22.321489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.321525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.321725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.321761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.321980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.322043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.322304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.322380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.322642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.322677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.322904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.322969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.323204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.323267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.323539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.323578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.323752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.323787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.323973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.324009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 
00:30:18.599 [2024-07-24 20:24:22.324178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.324220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.324448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.324484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.324667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.324703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.324878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.324912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.325105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.325174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 20:24:22.325475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 20:24:22.325517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.325738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.325774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.326033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.326096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.326364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.326444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.326711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.326746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 
00:30:18.600 [2024-07-24 20:24:22.326930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.326965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.327225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.327288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.327542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.327579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.327783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.327820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.328029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.328093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.328363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.328400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.328631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.328666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.328890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.328926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.329088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.329123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.329328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.329363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 
00:30:18.600 [2024-07-24 20:24:22.329569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.329612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.329834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.329869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.330149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.330226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.330496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.330532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.330741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.330777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.331009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.331072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.331309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.331376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.331676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.331712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.331945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.332023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.332295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.332357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 
00:30:18.600 [2024-07-24 20:24:22.332639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.332675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.332914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.332978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.333271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.333345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.333629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.333673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.333961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.334024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.334287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.334361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.334711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.334747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.334963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 20:24:22.335028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 20:24:22.335263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.335326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.335591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.335628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 
00:30:18.601 [2024-07-24 20:24:22.335848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.335886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.336126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.336189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.336478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.336514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.336710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.336744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.336947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.336982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.337121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.337155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.337338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.337416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.337651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.337686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.337873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.337908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.338105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.338167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 
00:30:18.601 [2024-07-24 20:24:22.338394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.338486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.338705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.338739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.338948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.339012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.339271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.339333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.339625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.339662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.339867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.339908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.340066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.340117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.340368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.340404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.340609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.340645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.340862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.340897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 
00:30:18.601 [2024-07-24 20:24:22.341068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.341102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.341319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.341384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.341668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.341703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.341946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.342003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.342261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.342326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.342644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.342680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.342900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.342935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.343188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.343251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.343518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.343555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.343733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.343768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 
00:30:18.601 [2024-07-24 20:24:22.343939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.344010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.344241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.344305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.344542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.344579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.344769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.344809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.345008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.345042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.345314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.601 [2024-07-24 20:24:22.345352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.601 qpair failed and we were unable to recover it. 00:30:18.601 [2024-07-24 20:24:22.345567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.345602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.345803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.345838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.346008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.346043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.346269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.346334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 
00:30:18.602 [2024-07-24 20:24:22.346615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.346650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.346825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.346860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.347061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.347125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.347391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.347486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.347692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.347726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.347955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.348019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.348285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.348348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.348617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.348652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.348821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.348854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.349048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.349120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 
00:30:18.602 [2024-07-24 20:24:22.349373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.349411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.349640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.349675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.349898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.349931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.350136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.350170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.350381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.350480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.350701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.350734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.350955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.350989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.351216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.351281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.351585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.351621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.351834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.351884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 
00:30:18.602 [2024-07-24 20:24:22.352085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.352178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.352478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.352514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.352712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.352746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.352960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.352994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.602 [2024-07-24 20:24:22.353197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.602 [2024-07-24 20:24:22.353262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.602 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.353554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.353590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.353821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.353887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.354130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.354193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.354460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.354497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.354702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.354737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 
00:30:18.877 [2024-07-24 20:24:22.354968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.355001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.355173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.355213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.355377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.355409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.355620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.355654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.355852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.355906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.356122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.356158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.356320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.356357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.356548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.356583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.356788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.356823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.357000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.357033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 
00:30:18.877 [2024-07-24 20:24:22.357207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.357240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.357444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.357478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.357692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.357742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.357942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.357991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.358190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.358246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.358423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.358466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.358696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.358757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.359000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.359062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.359262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.359316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.359532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.359568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 
00:30:18.877 [2024-07-24 20:24:22.359800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.359854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.360060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.360115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.360318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.360352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.360503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.360538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.877 qpair failed and we were unable to recover it. 00:30:18.877 [2024-07-24 20:24:22.360717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.877 [2024-07-24 20:24:22.360780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.360957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.361013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.361243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.361298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.361499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.361560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.361722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.361757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.361990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.362044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 
00:30:18.878 [2024-07-24 20:24:22.362210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.362267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.362483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.362516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.362691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.362724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.362902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.362935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.363162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.363214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.363391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.363424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.363619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.363673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.363906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.363962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.364155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.364210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.364387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.364420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 
00:30:18.878 [2024-07-24 20:24:22.364664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.364720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.364954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.365007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.365243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.365301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.365515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.365569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.365747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.365805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.366031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.366085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.366298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.366333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.366555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.366610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.366863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.366924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.367164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.367223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 
00:30:18.878 [2024-07-24 20:24:22.367394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.367437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.367635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.367690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.367877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.367937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.368138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.368193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.368394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.368437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.368639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.368695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.368910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.368965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.369174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.369240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.369444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.369479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 00:30:18.878 [2024-07-24 20:24:22.369667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.878 [2024-07-24 20:24:22.369732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.878 qpair failed and we were unable to recover it. 
00:30:18.884 [2024-07-24 20:24:22.418479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.418545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.418768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.418823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.419051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.419106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.419320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.419355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.419546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.419601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.419798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.419858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.420061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.420117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.420320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.420355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.420582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.420635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.420858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.420912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 
00:30:18.884 [2024-07-24 20:24:22.421109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.421163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.421336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.421370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.421598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.421652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.421874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.421928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.422113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.422165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.422365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.422399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.422633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.422693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.422879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.422931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.423147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.423203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.423375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.423409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 
00:30:18.884 [2024-07-24 20:24:22.423644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.423705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.423934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.423989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.424208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.884 [2024-07-24 20:24:22.424263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.884 qpair failed and we were unable to recover it. 00:30:18.884 [2024-07-24 20:24:22.424490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.424525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.424717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.424771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.425000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.425056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.425263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.425297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.425471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.425506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.425730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.425785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.425969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.426025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 
00:30:18.885 [2024-07-24 20:24:22.426196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.426230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.426406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.426447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.426647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.426715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.426986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.427041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.427267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.427327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.427514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.427576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.427774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.427828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.428049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.428102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.428303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.428338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.428524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.428579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 
00:30:18.885 [2024-07-24 20:24:22.428807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.428861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.429084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.429140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.429338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.429373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.429597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.429654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.429880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.429935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.430144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.430197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.430401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.430444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.430630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.430663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.430906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.430958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.431155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.431190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 
00:30:18.885 [2024-07-24 20:24:22.431419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.431460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.431699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.431733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.431875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.431930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.432104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.432159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.432371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.432405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.432610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.432666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.432888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.432943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.433150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.433205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.433373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.433407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 20:24:22.433567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.433621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 
00:30:18.885 [2024-07-24 20:24:22.433845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 20:24:22.433900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.434137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.434192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.434454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.434489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.434714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.434768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.434957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.435010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.435231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.435286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.435460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.435495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.435694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.435759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.435982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.436037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.436206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.436240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 
00:30:18.886 [2024-07-24 20:24:22.436416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.436457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.436654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.436712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.436909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.436961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.437177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.437232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.437504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.437578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.437785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.437833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.438053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.438108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.438307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.438362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.438563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.438598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.438859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.438913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 
00:30:18.886 [2024-07-24 20:24:22.439154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.439208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.439413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.439455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.439715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.439749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.440018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.440077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.440300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.440352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.440558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.440593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.440818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.440878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.441089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.441148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.441296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.441331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.441552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.441606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 
00:30:18.886 [2024-07-24 20:24:22.441835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.441896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.442067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 20:24:22.442120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 20:24:22.442333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.442367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.442554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.442610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.442827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.442880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.443057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.443090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.443305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.443339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.443531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.443587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.443813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.443879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.444078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.444136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 
00:30:18.887 [2024-07-24 20:24:22.444390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.444424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.444660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.444719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.444942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.444997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.445211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.445265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.445467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.445519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.445748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.445804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.446102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.446163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.446403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.446443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.446656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.446708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.446901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.446955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 
00:30:18.887 [2024-07-24 20:24:22.447184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.447239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.447412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.447455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.447658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.447692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.447871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.447925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.448148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.448209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.448460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.448496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.448699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.448734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.448917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.448974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.449193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.449248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.449458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.449493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 
00:30:18.887 [2024-07-24 20:24:22.449700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.449735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.449932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.449986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.450185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.450239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.450443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.450478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.450658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.450693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.450894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.450946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.451149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.451205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.451386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.451420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.451643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.451677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 00:30:18.887 [2024-07-24 20:24:22.451874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.887 [2024-07-24 20:24:22.451927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.887 qpair failed and we were unable to recover it. 
00:30:18.888 [2024-07-24 20:24:22.452143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.452199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.452402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.452453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.452669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.452703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.452900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.452956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.453141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.453194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.453402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.453445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.453650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.453683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.453882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.453936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.454130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.454185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.454384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.454418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 
00:30:18.888 [2024-07-24 20:24:22.454598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.454653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.454853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.454908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.455134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.455188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.455362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.455396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.455624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.455681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.455889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.455943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.456163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.456218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.456417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.456470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.456645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.456712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 00:30:18.888 [2024-07-24 20:24:22.456935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.888 [2024-07-24 20:24:22.456989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.888 qpair failed and we were unable to recover it. 
00:30:18.893 [2024-07-24 20:24:22.506207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 20:24:22.506265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 20:24:22.506483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 20:24:22.506519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 20:24:22.506731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 20:24:22.506787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 20:24:22.506976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 20:24:22.507033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 20:24:22.507234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 20:24:22.507270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 20:24:22.507449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 20:24:22.507485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 20:24:22.507710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.507763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.507948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.508005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.508217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.508252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.508439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.508474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 
00:30:18.894 [2024-07-24 20:24:22.508700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.508760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.508974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.509031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.509267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.509319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.509511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.509566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.509781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.509847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.510043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.510098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.510305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.510341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.510558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.510614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.510836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.510890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.511115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.511183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 
00:30:18.894 [2024-07-24 20:24:22.511389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.511425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.511658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.511713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.511948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.512004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.512284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.512341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.512642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.512677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.512896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.512951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.513178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.513238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.513454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.513489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.513675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.513736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.513921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.513974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 
00:30:18.894 [2024-07-24 20:24:22.514195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.514262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.514493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.514531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.514701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.514757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.514942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.514999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.515172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.515206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.515410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.515452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.515649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.515710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.515939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.515992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.516202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.516259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.516492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.516557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 
00:30:18.894 [2024-07-24 20:24:22.516753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.516805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.517032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.517097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.894 qpair failed and we were unable to recover it. 00:30:18.894 [2024-07-24 20:24:22.517300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.894 [2024-07-24 20:24:22.517334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.517498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.517533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.517720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.517778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.517923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.517978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.518162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.518215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.518422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.518473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.518697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.518756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.518976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.519031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 
00:30:18.895 [2024-07-24 20:24:22.519213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.519270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.519488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.519523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.519707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.519764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.519996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.520053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.520225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.520259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.520494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.520551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.520764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.520823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.521048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.521101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.521270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.521303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.521492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.521552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 
00:30:18.895 [2024-07-24 20:24:22.521771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.521822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.522048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.522105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.522290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.522324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.522538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.522595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.522834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.522892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.523122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.523177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.523351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.523386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.523555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.523611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.523838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.523897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.524095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.524163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 
00:30:18.895 [2024-07-24 20:24:22.524365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.524400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.524627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.524692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.524910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.524966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.525130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.525187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.525389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.525424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.525654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.525715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.525942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.526000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.526220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.526284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.526500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.526563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 00:30:18.895 [2024-07-24 20:24:22.526784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.895 [2024-07-24 20:24:22.526839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.895 qpair failed and we were unable to recover it. 
00:30:18.895 [2024-07-24 20:24:22.527067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.527139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.527350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.527391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.527616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.527686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.527908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.527962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.528192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.528246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.528460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.528496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.528710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.528771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.529000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.529056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.529254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.529311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.529536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.529592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 
00:30:18.896 [2024-07-24 20:24:22.529807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.529861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.530085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.530139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.530347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.530381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.530550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.530607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.530800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.530856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.531046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.531104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.531273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.531308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.531519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.531576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.531750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.531788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.531951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.531988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 
00:30:18.896 [2024-07-24 20:24:22.532182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.532217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.532439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.532486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.532652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.532699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.532931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.532977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.533189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.533250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.533457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.533521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.533779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.533819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.534043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.534100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.534290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.534325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.534539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.534597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 
00:30:18.896 [2024-07-24 20:24:22.534831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.896 [2024-07-24 20:24:22.534888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.896 qpair failed and we were unable to recover it. 00:30:18.896 [2024-07-24 20:24:22.535103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.535159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.535330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.535365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.535559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.535614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.535796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.535852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.536087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.536125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.536342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.536377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.536557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.536614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.536838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.536894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.537134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.537195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 
00:30:18.897 [2024-07-24 20:24:22.537421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.537465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.537659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.537721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.537918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.537974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.538175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.538236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.538438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.538473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.538626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.538683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.538899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.538955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.539172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.539249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.539460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.539496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.539664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.539722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 
00:30:18.897 [2024-07-24 20:24:22.539940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.539993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.540171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.540224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.540409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.540455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.540645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.540711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.540930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.540994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.541198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.541261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.541482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.541541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.541751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.541805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.542027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.542086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 00:30:18.897 [2024-07-24 20:24:22.542272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.897 [2024-07-24 20:24:22.542306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:18.897 qpair failed and we were unable to recover it. 
00:30:18.897 [2024-07-24 20:24:22.542501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.897 [2024-07-24 20:24:22.542560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:18.897 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats ~45 more times for tqpair=0x7fe95c000b90, through 2024-07-24 20:24:22.554516 ...]
00:30:18.899 [2024-07-24 20:24:22.554808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.899 [2024-07-24 20:24:22.554906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:18.899 qpair failed and we were unable to recover it.
[... the same failure triplet repeats ~163 more times for tqpair=0x578ea0, through 2024-07-24 20:24:22.601817 ...]
00:30:18.903 [2024-07-24 20:24:22.602057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.602091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.602301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.602373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.602662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.602696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.602959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.603021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.603267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.603329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.603587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.603621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.603852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.603915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.604183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.604246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.604483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.604518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.604723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.604785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 
00:30:18.903 [2024-07-24 20:24:22.605058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.605120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.605375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.605451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.605667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.605702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.605913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.605976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.606283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.606344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.606628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.606663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.606863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.606897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.607105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.607167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.607400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 20:24:22.607478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 20:24:22.607714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.607789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 
00:30:18.904 [2024-07-24 20:24:22.608064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.608098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.608310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.608374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.608651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.608685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.608866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.608927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.609199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.609233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.609485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.609540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.609776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.609838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.610095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.610157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.610419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.610467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.610650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.610717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 
00:30:18.904 [2024-07-24 20:24:22.610981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.611043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.611266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.611328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.611621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.611656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.611895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.611958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.612225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.612287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.612551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.612586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.612789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.612824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.613055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.613117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.613423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.613502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.613709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.613768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 
00:30:18.904 [2024-07-24 20:24:22.614032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.614066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.614274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.614336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.614647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.614682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.614931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.614992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.615258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.615291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.615514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.615549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.615748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.615809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.616072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.616133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.616366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.616400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.616587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.616621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 
00:30:18.904 [2024-07-24 20:24:22.616784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.616846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.617077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.617139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.617399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.617441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.617630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.617679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.617923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.617984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.618272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.618334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.618633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.904 [2024-07-24 20:24:22.618669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.904 qpair failed and we were unable to recover it. 00:30:18.904 [2024-07-24 20:24:22.618897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.618960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.619214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.619275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.619514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.619549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 
00:30:18.905 [2024-07-24 20:24:22.619695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.619729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.619950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.620012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.620250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.620312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.620571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.620605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.620816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.620850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.621061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.621122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.621380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.621453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.621670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.621704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.621977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.622012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.622185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.622256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 
00:30:18.905 [2024-07-24 20:24:22.622509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.622543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.622719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.622791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.623101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.623135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.623403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.623490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.623638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.623672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.623871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.623933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.624204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.624238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.624473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.624536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.624794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.624855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.625083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.625145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 
00:30:18.905 [2024-07-24 20:24:22.625387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.625421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.625577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.625611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.625781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.625842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.626091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.626154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.626403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.626450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.626650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.626720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.626977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.627039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.627294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.627355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.905 qpair failed and we were unable to recover it. 00:30:18.905 [2024-07-24 20:24:22.627598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.905 [2024-07-24 20:24:22.627632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.627856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.627917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 
00:30:18.906 [2024-07-24 20:24:22.628166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.628228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.628462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.628526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.628732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.628766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.628955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.629018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.629285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.629346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.629625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.629659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.629837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.629880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.630116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.630178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.630448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.630517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.630744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.630805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 
00:30:18.906 [2024-07-24 20:24:22.631070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.631103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.631293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.631355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.631605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.631639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.631829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.631891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.632141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.632175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.632326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.632388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.632668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.632702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.633023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.633085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.633335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.633396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.633658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.633692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 
00:30:18.906 [2024-07-24 20:24:22.633898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.633960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.634226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.634288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.634558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.634593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.634786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.634849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.635118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.635180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.635465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.635528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.635789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.635823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.636019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.636081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.636323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.636385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.636664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.636698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 
00:30:18.906 [2024-07-24 20:24:22.636941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.636975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.637186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.637248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.637516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.637550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.637753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.637815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.638070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.906 [2024-07-24 20:24:22.638104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.906 qpair failed and we were unable to recover it. 00:30:18.906 [2024-07-24 20:24:22.638304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.638365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.638636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.638670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.638878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.638939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.639182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.639215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.639422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.639516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 
00:30:18.907 [2024-07-24 20:24:22.639661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.639695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.639953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.640015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.640251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.640286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.640500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.640535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.640762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.640825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.641084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.641146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.641390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.641423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.641645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.641683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.641897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.641930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 00:30:18.907 [2024-07-24 20:24:22.642102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.907 [2024-07-24 20:24:22.642134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:18.907 qpair failed and we were unable to recover it. 
00:30:19.186 [2024-07-24 20:24:22.694636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.694669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.694877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.694910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.695110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.695142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.695322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.695354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.695553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.695603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.695820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.695852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.696035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.696069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.696256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.696318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.696574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.696609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.696799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.696862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 
00:30:19.186 [2024-07-24 20:24:22.697138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.697172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.697398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.697487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.697672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.697706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.186 qpair failed and we were unable to recover it. 00:30:19.186 [2024-07-24 20:24:22.697939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.186 [2024-07-24 20:24:22.698002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.698277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.698311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.698569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.698604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.698829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.698891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.699128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.699190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.699472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.699505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.699696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.699758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 
00:30:19.187 [2024-07-24 20:24:22.700043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.700105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.700346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.700408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.700684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.700719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.700919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.700981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.701216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.701278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.701523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.701558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.701772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.701806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.702078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.702140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.702396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.702470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.702688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.702747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 
00:30:19.187 [2024-07-24 20:24:22.703011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.703046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.703245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.703307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.703564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.703599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.703797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.703861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.704134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.704167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.704367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.704442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.704672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.704706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.705003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.705065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.705305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.705339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.705550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.705585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 
00:30:19.187 [2024-07-24 20:24:22.705800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.705863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.706100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.706162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.706424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.706464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.706642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.706702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.706960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.707022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.707252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.707314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.707554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.707588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.707787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.707849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.708133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.708195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.708468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.708530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 
00:30:19.187 [2024-07-24 20:24:22.708669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.708703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.187 qpair failed and we were unable to recover it. 00:30:19.187 [2024-07-24 20:24:22.708934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.187 [2024-07-24 20:24:22.708996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.709263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.709335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.709608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.709642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.709786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.709820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.710039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.710101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.710334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.710397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.710668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.710702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.710976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.711010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.711223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.711285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 
00:30:19.188 [2024-07-24 20:24:22.711554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.711589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.711808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.711870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.712115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.712149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.712364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.712426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.712679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.712713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.712980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.713043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.713321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.713355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.713560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.713595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.713824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.713886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.714134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.714196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 
00:30:19.188 [2024-07-24 20:24:22.714447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.714482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.714708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.714771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.715029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.715091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.715367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.715444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.715695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.715730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.715954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.716016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.716272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.716335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.716587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.716623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.716838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.716872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.717075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.717137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 
00:30:19.188 [2024-07-24 20:24:22.717405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.717485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.717713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.717787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.718065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.718099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.718266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.718328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.718609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.718644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.718865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.718928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.719170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.719204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.719423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.719524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.719746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.719809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.188 qpair failed and we were unable to recover it. 00:30:19.188 [2024-07-24 20:24:22.720038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.188 [2024-07-24 20:24:22.720100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 
00:30:19.189 [2024-07-24 20:24:22.720362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.720425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.720614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.720649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.720861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.720923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.721196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.721258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.721504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.721539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.721710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.721773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.722043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.722105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.722343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.722405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.722674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.722708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.722908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.722971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 
00:30:19.189 [2024-07-24 20:24:22.723238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.723300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.723552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.723587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.723762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.723796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.723950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.724012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.724270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.724333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.724591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.724626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.724826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.724860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.725116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.725179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.725451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.725521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.725697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.725756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 
00:30:19.189 [2024-07-24 20:24:22.725970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.726005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.726195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.726257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.726517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.726552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.726749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.726813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.727083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.727118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.727347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.727410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.727670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.727704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.727935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.727998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.728235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.728268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.728461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.728524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 
00:30:19.189 [2024-07-24 20:24:22.728697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.728757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.729019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.729081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.729331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.729394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.729621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.729655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.729848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.729911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.730177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.730240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.730520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.730555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.730757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.189 [2024-07-24 20:24:22.730820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.189 qpair failed and we were unable to recover it. 00:30:19.189 [2024-07-24 20:24:22.731089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.731151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.731406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.731481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 
00:30:19.190 [2024-07-24 20:24:22.731692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.731727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.731948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.732010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.732265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.732327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.732580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.732615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.732827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.732862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.733041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.733103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.733310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.733372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.733621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.733656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.733858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.733892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 00:30:19.190 [2024-07-24 20:24:22.734067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.190 [2024-07-24 20:24:22.734130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.190 qpair failed and we were unable to recover it. 
00:30:19.190 [2024-07-24 20:24:22.734388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.190 [2024-07-24 20:24:22.734465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.190 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 20:24:22.734707 through 20:24:22.790548 ...]
00:30:19.195 [2024-07-24 20:24:22.790772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.195 [2024-07-24 20:24:22.790806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.195 qpair failed and we were unable to recover it.
00:30:19.195 [2024-07-24 20:24:22.790978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.195 [2024-07-24 20:24:22.791014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.791238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.791302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.791605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.791641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.791843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.791886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.792115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.792150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.792392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.792505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.792697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.792731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.792956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.793021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.793281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.793320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.793511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.793547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 
00:30:19.196 [2024-07-24 20:24:22.793726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.793765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.793971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.794035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.794272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.794308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.794524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.794560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.794766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.794802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.794998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.795071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.795348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.795384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.795600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.795634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.795796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.795831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.796101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.796164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 
00:30:19.196 [2024-07-24 20:24:22.796449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.796485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.796700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.796735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.796958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.797022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.797229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.797305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.797586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.797623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.797824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.797859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.798103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.798167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.798396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.798491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.798689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.798724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.798940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.798995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 
00:30:19.196 [2024-07-24 20:24:22.799259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.799322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.799625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.799661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.799864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.799905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.800129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.800193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.800512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.800549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.800787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.800822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.801025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.801061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.801296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.801359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.801654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.801691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.801857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.801894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 
00:30:19.196 [2024-07-24 20:24:22.802094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.802129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.802367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.802469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.802711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.802751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.802916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.802952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.803191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.196 [2024-07-24 20:24:22.803226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.196 qpair failed and we were unable to recover it. 00:30:19.196 [2024-07-24 20:24:22.803477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.803531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.803728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.803762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.803950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.804015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.804278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.804316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.804531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.804567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 
00:30:19.197 [2024-07-24 20:24:22.804748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.804790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.804990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.805052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.805315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.805358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.805572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.805607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.805781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.805817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.805968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.806031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.806304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.806339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.806520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.806556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.806697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.806732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.806948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.807011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 
00:30:19.197 [2024-07-24 20:24:22.807284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.807320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.807486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.807522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.807733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.807769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.807995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.808058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.808323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.808359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.808538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.808574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.808801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.808867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.809126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.809198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.809490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.809526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.809716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.809753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 
00:30:19.197 [2024-07-24 20:24:22.810033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.810097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.810358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.810448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.810686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.810722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.810880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.810916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.811133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.811197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.811465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.811524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.811718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.811753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.811927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.811963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.812163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.812226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.812478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.812536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 
00:30:19.197 [2024-07-24 20:24:22.812757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.812793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.812967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.813002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.813199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.813263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.813471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.813538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.813731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.813766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.813944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.813980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.814213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.814276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.814529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.814570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.814778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.814813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.197 qpair failed and we were unable to recover it. 00:30:19.197 [2024-07-24 20:24:22.815018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.197 [2024-07-24 20:24:22.815053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 
00:30:19.198 [2024-07-24 20:24:22.815301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.815364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.815659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.815695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.815907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.815943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.816157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.816220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.816502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.816558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.816799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.816834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.816987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.817023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.817256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.817320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.817610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.817647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.817859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.817894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 
00:30:19.198 [2024-07-24 20:24:22.818109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.818144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.818409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.818492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.818712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.818747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.818932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.818968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.819203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.819238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.819416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.819466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.819646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.819681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.819898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.819977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.820247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.820287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.820512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.820548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 
00:30:19.198 [2024-07-24 20:24:22.820720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.820756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.820979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.821042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.821309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.821345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.821546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.821582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.821796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.821856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.822119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.822187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.822475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.822511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.822698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.822733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.822919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.822982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 00:30:19.198 [2024-07-24 20:24:22.823224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.198 [2024-07-24 20:24:22.823299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.198 qpair failed and we were unable to recover it. 
00:30:19.198 [2024-07-24 20:24:22.823569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.198 [2024-07-24 20:24:22.823605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.198 qpair failed and we were unable to recover it.
[... identical failures continue against tqpair=0x578ea0 through 20:24:22.824417 ...]
00:30:19.198 [2024-07-24 20:24:22.824725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.198 [2024-07-24 20:24:22.824782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.198 qpair failed and we were unable to recover it.
[... the same failure repeats against tqpair=0x7fe95c000b90 through 20:24:22.826548, then against tqpair=0x578ea0 again from 20:24:22.826764 through 20:24:22.837049 ...]
00:30:19.199 [2024-07-24 20:24:22.837202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.199 [2024-07-24 20:24:22.837266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.199 qpair failed and we were unable to recover it.
00:30:19.199 [2024-07-24 20:24:22.837517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.199 [2024-07-24 20:24:22.837555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.199 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.837724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.837760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.837959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.837994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.838180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.838243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.838491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.838546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.838793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.838828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.839031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.839067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.839274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.839337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.839637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.839679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.839885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.839929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 
00:30:19.200 [2024-07-24 20:24:22.840140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.840175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.840484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.840527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.840714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.840748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.840966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.841033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.841276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.841314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.841464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.841501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.841707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.841743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.841956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.842021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.842281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.842318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 20:24:22.842560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 20:24:22.842596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 
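On Linux, errno = 111 is ECONNREFUSED: each connect() above was answered with a TCP reset because nothing was accepting connections on 10.0.0.2 port 4420, the NVMe/TCP well-known port. A minimal standalone sketch (plain POSIX C, not SPDK code; the address and port simply mirror the log) reproduces the failure:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),          /* NVMe/TCP well-known port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target, this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Run against a host with no listener on that port, this prints "connect() failed, errno = 111 (Connection refused)" -- the same condition posix_sock_create keeps reporting above.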
[... two further failures for tqpair=0x578ea0 (20:24:22.842750 through 20:24:22.843030), after which the identical error sequence resumes for tqpair=0x7fe95c000b90 and repeats ~148 times (timestamps 20:24:22.843269 through 20:24:22.881171), always connect() errno = 111 against 10.0.0.2, port=4420 followed by "qpair failed and we were unable to recover it." ...]
00:30:19.203 [2024-07-24 20:24:22.881444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.881479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 20:24:22.881702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.881757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 20:24:22.881977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.882031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 20:24:22.882247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.882301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 20:24:22.882476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.882511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 20:24:22.882744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.882800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 20:24:22.883018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.883073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 20:24:22.883243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.883277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 20:24:22.883501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 20:24:22.883556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.883747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.883803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 
00:30:19.204 [2024-07-24 20:24:22.884017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.884071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.884274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.884308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.884503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.884561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.884784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.884839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.885057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.885113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.885295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.885332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.885534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.885587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.885745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.885799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.886003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.886059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.886228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.886262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 
00:30:19.204 [2024-07-24 20:24:22.886483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.886518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.886735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.886788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.887009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.887064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.887232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.887266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.887480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.887537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.887773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.887834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.888031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.888084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.888293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.888328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.888504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.888566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.888787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.888842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 
00:30:19.204 [2024-07-24 20:24:22.889048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.889103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.889310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.889345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.889547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.889602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.889814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.889867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.890048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.890104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.890317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.890351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.890555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.890611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.890839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.890893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.891084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.891139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.891332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.891366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 
00:30:19.204 [2024-07-24 20:24:22.891593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.891649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.891873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.891929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.892121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.892175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.892336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.892370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.892592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.892648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.892875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.892932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.893163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.893223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.893399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.893440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.893588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.893644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.893863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.893919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 
00:30:19.204 [2024-07-24 20:24:22.894102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.894156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.894336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.894370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.894604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.894660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.204 qpair failed and we were unable to recover it. 00:30:19.204 [2024-07-24 20:24:22.894812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.204 [2024-07-24 20:24:22.894868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.895064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.895120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.895303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.895338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.895504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.895565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.895796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.895850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.896069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.896123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.896339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.896374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 
00:30:19.205 [2024-07-24 20:24:22.896562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.896619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.896829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.896886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.897103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.897159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.897336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.897371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.897543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.897598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.897820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.897877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.898094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.898149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.898330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.898365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.898560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.898618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.898805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.898861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 
00:30:19.205 [2024-07-24 20:24:22.899086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.899142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.899320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.899354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.899540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.899601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.899826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.899879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.900107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.900161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.900344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.900379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.900581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.900637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.900804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.900857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.901073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.901128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.901346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.901380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 
00:30:19.205 [2024-07-24 20:24:22.901591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.901649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.901838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.901894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.902099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.902153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.902357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.902392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.902620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.902677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.902869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.902924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.903194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.903247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.903416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.903460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.903666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.903731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.903924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.903978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 
00:30:19.205 [2024-07-24 20:24:22.904172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.904227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.904447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.904483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.904677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.904739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.904959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.905014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.905190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.905245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.905457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.905492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.905673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.905729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.905956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.906014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.906229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.906285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.906493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.906528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 
00:30:19.205 [2024-07-24 20:24:22.906723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.906779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.906989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.205 [2024-07-24 20:24:22.907044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.205 qpair failed and we were unable to recover it. 00:30:19.205 [2024-07-24 20:24:22.907250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.907285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.907455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.907490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.907719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.907774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.907999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.908054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.908264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.908299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.908509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.908567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.908794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.908850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.909052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.909107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 
00:30:19.206 [2024-07-24 20:24:22.909291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.909325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.909593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.909649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.909876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.909931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.910150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.910206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.910380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.910415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.910618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.910673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.910862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.910917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.911135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.911189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.911364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.911398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.911638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.911707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 
00:30:19.206 [2024-07-24 20:24:22.911910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.911967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.912187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.912242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.912412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.912464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.912643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.912699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.912929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.912985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.913167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.913223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.913437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.913473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.913660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.913723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.913946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.914000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.914222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.914276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 
00:30:19.206 [2024-07-24 20:24:22.914450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.914487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.914675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.914734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.914966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.915027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.915253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.915311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.915461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.915496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.915694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.915754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.915967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.916023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.916213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.916248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.916423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.916465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.916656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.916720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 
00:30:19.206 [2024-07-24 20:24:22.916919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.916975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.917164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.917221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.917392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.917440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.917661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.917722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.917952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.918007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.918192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.918248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.918446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.918482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.918697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.918753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.918983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.919037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.919263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.919319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 
00:30:19.206 [2024-07-24 20:24:22.919534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.919570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.919796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.919850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.206 [2024-07-24 20:24:22.920049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.206 [2024-07-24 20:24:22.920103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.206 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.920306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.920341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.920530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.920585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.920805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.920860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.921098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.921154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.921338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.921373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.921609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.921663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.921876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.921931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 
00:30:19.207 [2024-07-24 20:24:22.922120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.922173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.922349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.922384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.922572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.922629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.922826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.922880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.923144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.923200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.923393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.923436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.923698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.923754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.923994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.924050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.924287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.924342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.924524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.924559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 
00:30:19.207 [2024-07-24 20:24:22.924745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.924801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.924997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.925051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.925268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.925308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.925579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.925634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.925852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.925907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.926097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.926152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.926369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.926405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.926599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.926654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.926873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.926927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.927149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.927205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 
00:30:19.207 [2024-07-24 20:24:22.927414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.927457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.927620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.927676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.927898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.927952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.928170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.928225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.928445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.928480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.928670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.928704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.928938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.928993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.929178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.929235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.929426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.929470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.929681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.929715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 
00:30:19.207 [2024-07-24 20:24:22.929942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.929999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.930185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.930239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.930447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.930482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.930655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.930708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.930889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.930945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.931161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.931215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.931397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.931438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.207 qpair failed and we were unable to recover it. 00:30:19.207 [2024-07-24 20:24:22.931676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.207 [2024-07-24 20:24:22.931735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.931928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.931984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.932211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.932268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 
00:30:19.208 [2024-07-24 20:24:22.932449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.932485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.932718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.932773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.933000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.933055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.933236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.933292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.933579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.933635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.933852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.933908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.934134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.934190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.934391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.934426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.934657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.934718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.934901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.934954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 
00:30:19.208 [2024-07-24 20:24:22.935136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.935191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.935340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.935375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.935604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.935666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.935867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.935922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.936107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.936162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.936377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.936411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.936649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.936710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.936892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.936948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.937179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.937235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.937452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.937488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 
00:30:19.208 [2024-07-24 20:24:22.937718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.937773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.938014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.938069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.938303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.938358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.938538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.938573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.938799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.938855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.939062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.939118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.939298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.939333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.939509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.939569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.939806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.939861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.940048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.940104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 
00:30:19.208 [2024-07-24 20:24:22.940309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.940344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.940538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.940591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.940814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.940869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.941083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.941138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.941310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.941344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.941531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.941587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.941785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.941842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.942026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.942079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.942288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.942322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.942539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.942596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 
00:30:19.208 [2024-07-24 20:24:22.942804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.942859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.943081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.943137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.943331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.943365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.943594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.943649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.943843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.943898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.944118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.944173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.944332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.944366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.944585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.944641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.944794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.944850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 20:24:22.945062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.945121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 
00:30:19.208 [2024-07-24 20:24:22.945323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 20:24:22.945358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.945515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.945578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.945764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.945823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.945981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.946037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.946252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.946287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.946478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.946512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.946715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.946748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.946949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.946982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.947152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.947185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.947368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.947417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 
00:30:19.209 [2024-07-24 20:24:22.947596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.947645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.947842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.947898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.948112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.948145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.948358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.948392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.948630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.948664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.948871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.948905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.949094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.949127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.949335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.949369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.949605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.949661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 20:24:22.949884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 20:24:22.949917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 
00:30:19.485 [2024-07-24 20:24:22.950174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.950207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.950408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.950449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.950643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.950677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.950851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.950884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.951058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.951091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.951347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.951380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.951558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.951593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.951800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.951833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.952021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.952054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.952242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.952275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 
00:30:19.485 [2024-07-24 20:24:22.952479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.952513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.952684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.952716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.952911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.952943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.953157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.953189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.953344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.953377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.953570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.953604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.953809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.953842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.953975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.954023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.954241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.954276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.954516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.954550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 
00:30:19.485 [2024-07-24 20:24:22.954775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.954830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.955067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.955121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.955313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.955353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.955580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.955637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.955833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.955888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.956082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.956138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.956343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.956378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.956614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.956670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.956883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.956937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.957164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.957220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 
00:30:19.485 [2024-07-24 20:24:22.957422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.957464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.957656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.957711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.957893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.957946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.485 [2024-07-24 20:24:22.958127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.485 [2024-07-24 20:24:22.958180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.485 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.958383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.958417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.958652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.958707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.958932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.958985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.959173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.959229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.959449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.959485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.959662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.959717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 
00:30:19.486 [2024-07-24 20:24:22.959926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.959982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.960202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.960258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.960485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.960521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.960745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.960801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.960984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.961036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.961243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.961277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.961503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.961563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.961753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.961807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.962018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.962052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 00:30:19.486 [2024-07-24 20:24:22.962326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.486 [2024-07-24 20:24:22.962361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.486 qpair failed and we were unable to recover it. 
00:30:19.486 [2024-07-24 20:24:22.962569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.962604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.962836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.962891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.963119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.963175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.963347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.963381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.963618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.963675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.963889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.963943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.964135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.964189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.964459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.964495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.964690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.964753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.964974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.965028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.965183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.965237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.965452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.965487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.965625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.965705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.965943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.965997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.966202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.966256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.966468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.966503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.966700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.966758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.966944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.967000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.967167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.967202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.486 [2024-07-24 20:24:22.967371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.486 [2024-07-24 20:24:22.967405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.486 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.967612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.967667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.967867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.967921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.968138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.968194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.968372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.968407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.968645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.968699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.968897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.968953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.969154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.969210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.969415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.969460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.969686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.969740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.969964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.970019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.970204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.970257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.970488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.970554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.970754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.970810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.971021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.971076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.971281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.971338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.971531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.971566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.971761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.971816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.972043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.972097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.972269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.972304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.972522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.972577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.972794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.972851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.973006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.973060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.973232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.973267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.973457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.973492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.973717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.973772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.973953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.974008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.974166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.974200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.974390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.974424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.974641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.974709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.974937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.974992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.975179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.975236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.975444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.975480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.975645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.975707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.975899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.975956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.976182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.976236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.487 [2024-07-24 20:24:22.976407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.487 [2024-07-24 20:24:22.976456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.487 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.976641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.976700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.976878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.976933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.977162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.977217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.977421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.977464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.977624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.977659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.977877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.977941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.978170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.978233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.978443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.978479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.978627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.978662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.978904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.978958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.979137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.979195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.979371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.979405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.979588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.979624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.979811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.979865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.980046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.980102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.980378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.980412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.980593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.980628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.980827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.980882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.981112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.981164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.981310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.981344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.981571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.981627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.981829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.981884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.982122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.982176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.982388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.982423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.982626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.982681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.982850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.982903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.983120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.983177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.983337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.983371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.983565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.983622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.983814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.983868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.984096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.984152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.984340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.984375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.984560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.984619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.984828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.984883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.985064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.985121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.985322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.985356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.488 qpair failed and we were unable to recover it.
00:30:19.488 [2024-07-24 20:24:22.985528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.488 [2024-07-24 20:24:22.985590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.985812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.985867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.986054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.986107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.986273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.986308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.986455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.986491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.986690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.986751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.986949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.987005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.987204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.987239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.987448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.987483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.987649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.987709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.987923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.987978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.988163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.988217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.988405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.988446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.988644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.988721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.988924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.988978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.989165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.989222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.989412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.989468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.989629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.989685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.989915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.989971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.990195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.990250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.990497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.990531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.990742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.990823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.991017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.991071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.991278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.991314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.991498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.991564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.991760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.991816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.489 qpair failed and we were unable to recover it.
00:30:19.489 [2024-07-24 20:24:22.991978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.489 [2024-07-24 20:24:22.992034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.992219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.992255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.992492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.992547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.992759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.992815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.993039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.993095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.993265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.993299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.993497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.993560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.993751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.993806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.994000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.994056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.994241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.994276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.994453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.994489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.994646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.994709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.994918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.994973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.995192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.995227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.995533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.995571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.995793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.995872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.996056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.996090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.996261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.996294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.996467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.996502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.996652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.996685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.996897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.996932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.997116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.997148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.998116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.998156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.998353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.998392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.998577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.998635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.998866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.998924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.999106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.999162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.999366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.999401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.999615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.999671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:22.999878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:22.999933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:23.000130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:23.000187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:23.000394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:23.000443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:23.000635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:23.000692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:23.000879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:23.000941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:23.001169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.490 [2024-07-24 20:24:23.001227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.490 qpair failed and we were unable to recover it.
00:30:19.490 [2024-07-24 20:24:23.001398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.001467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.001661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.001727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.001955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.002014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.002222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.002278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.002484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.002522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.002750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.002809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.003047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.003106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.003291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.003325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.003516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.003575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.003780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.003840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.004046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.004106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.004300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.004335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.004542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.004605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.004788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.004841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.005073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.005133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.005309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.005344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.005541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.491 [2024-07-24 20:24:23.005599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.491 qpair failed and we were unable to recover it.
00:30:19.491 [2024-07-24 20:24:23.005814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.005880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.006047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.006108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.006302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.006343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.006545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.006606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.006841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.006900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.007087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.007153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.007331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.007367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.007571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.007633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.007844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.007901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.008095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.008153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 
00:30:19.491 [2024-07-24 20:24:23.008375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.008412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.008616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.008677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.008887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.008949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.009151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.009207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.009461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.009499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.009672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.009729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.009978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.010038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.010248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.010316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.010551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.010620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 00:30:19.491 [2024-07-24 20:24:23.010851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.010920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.491 qpair failed and we were unable to recover it. 
00:30:19.491 [2024-07-24 20:24:23.011177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.491 [2024-07-24 20:24:23.011234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.011419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.011470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.011710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.011745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.011968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.012034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.012264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.012319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.012533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.012569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.012790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.012855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.013098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.013155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.013335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.013376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.013553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.013622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 
00:30:19.492 [2024-07-24 20:24:23.013816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.013887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.014087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.014148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.014352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.014397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.014677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.014802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.015164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.015260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.015618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.015693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.015954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.016025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.016322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.016387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.016600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.016635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.016828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.016874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 
00:30:19.492 [2024-07-24 20:24:23.017114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.017194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.017480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.017530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.017730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.017836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.018164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.018251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.018570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.018620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.018860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.018908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.019123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.019171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.019340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.019388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.019603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.019654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.019910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.019977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 
00:30:19.492 [2024-07-24 20:24:23.020217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.020287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.020509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.020560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.020785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.020848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.021117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.021183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.021399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.021450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.492 qpair failed and we were unable to recover it. 00:30:19.492 [2024-07-24 20:24:23.021612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.492 [2024-07-24 20:24:23.021649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.021874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.021933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.022122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.022188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.022359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.022394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.022588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.022626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 
00:30:19.493 [2024-07-24 20:24:23.022845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.022904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.023102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.023165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.023338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.023373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.023582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.023643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.023799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.023855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.024062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.024127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.024343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.024378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.024563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.024621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.024861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.024918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.025126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.025186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 
00:30:19.493 [2024-07-24 20:24:23.025379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.025415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.025638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.025715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.025943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.026012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.026188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.026245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.026461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.026498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.026661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.026729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.028039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.028079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.028262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.028298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.028496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.028534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.028700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.028736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 
00:30:19.493 [2024-07-24 20:24:23.028922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.028957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.029161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.029197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.029385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.029420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.029625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.029661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.029836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.029893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.030088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.030145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.030362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.030399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.493 qpair failed and we were unable to recover it. 00:30:19.493 [2024-07-24 20:24:23.030584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.493 [2024-07-24 20:24:23.030639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.030851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.030911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.031145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.031209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 
00:30:19.494 [2024-07-24 20:24:23.031394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.031438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.031616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.031673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.031857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.031920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.032139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.032193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.032378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.032413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.032603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.032670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.032866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.032927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.033130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.033186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.033375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.033412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.033603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.033661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 
00:30:19.494 [2024-07-24 20:24:23.033839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.033900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.034115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.034169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.034358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.034396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.034570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.034626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.034854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.034919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.035136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.035193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.035362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.035396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.035582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.035619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.035787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.035848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.036052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.036117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 
00:30:19.494 [2024-07-24 20:24:23.036286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.036320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.036494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.036531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.036757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.036822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.037008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.037070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.037259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.037293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.037466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.037502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.037645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.037712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.494 [2024-07-24 20:24:23.037875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.494 [2024-07-24 20:24:23.037932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.494 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.038092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.038125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.038286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.038321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 
00:30:19.495 [2024-07-24 20:24:23.038495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.038567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.038720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.038780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.038944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.038978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.039143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.039180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.039344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.039379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.039545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.039604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.039804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.039859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.040037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.040094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.040281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.040318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.040498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.040533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 
00:30:19.495 [2024-07-24 20:24:23.040674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.040709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.040924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.040958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.041149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.041184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.041356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.041391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.041605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.041666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.041884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.041946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.042185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.042248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.042451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.042486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.042659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.042726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.042916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.042971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 
00:30:19.495 [2024-07-24 20:24:23.043205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.043263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.043453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.043508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.043678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.043733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.043960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.044016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.044207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.044272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.044452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.044487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.045350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.045390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.045580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.045637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.045846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.045920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 00:30:19.495 [2024-07-24 20:24:23.046077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.046121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it. 
00:30:19.495 [2024-07-24 20:24:23.046328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.495 [2024-07-24 20:24:23.046363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.495 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair / qpair failed and we were unable to recover it — repeats continuously from 2024-07-24 20:24:23.046 through 20:24:23.096, always targeting addr=10.0.0.2, port=4420, against tqpair=0x7fe95c000b90 and briefly tqpair=0x578ea0; no qpair recovered ...]
00:30:19.502 [2024-07-24 20:24:23.097120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.097177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.097357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.097391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.097587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.097644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.097868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.097922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.098126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.098180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.098366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.098401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.098575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.098631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.098792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.098846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.099044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.099099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.099280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.099315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 
00:30:19.502 [2024-07-24 20:24:23.099490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.099570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.099749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.099804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.099987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.100040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.100204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.100238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.100452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.100487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.100642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.100699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.100883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.100940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.101126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.101182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.101386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.101420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.101590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.101646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 
00:30:19.502 [2024-07-24 20:24:23.101813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.101868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.102052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.102108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.102301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.102335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.102552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.102606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.102832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.102888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.103083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.103138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.103313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.103347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.103534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.103590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.103765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.103820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.104014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.104068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 
00:30:19.502 [2024-07-24 20:24:23.104236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.104270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.104445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.104500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.104664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.104727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.104885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.502 [2024-07-24 20:24:23.104939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.502 qpair failed and we were unable to recover it. 00:30:19.502 [2024-07-24 20:24:23.105172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.105227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.105401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.105441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.105630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.105685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.105909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.105963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.106179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.106234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.106438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.106473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 
00:30:19.503 [2024-07-24 20:24:23.106628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.106663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.106845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.106901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.107126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.107181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.107387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.107421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.107575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.107610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.107796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.107850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.108032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.108087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.108293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.108327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.108498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.108563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.108756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.108812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 
00:30:19.503 [2024-07-24 20:24:23.109012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.109065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.109277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.109312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.109453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.109488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.109653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.109710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.109901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.109954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.110153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.110208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.110376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.110415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.110587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.110641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.110837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.110891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.111075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.111129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 
00:30:19.503 [2024-07-24 20:24:23.111290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.111324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.111510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.111569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.111768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.111824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.112059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.112116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.112256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.112292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.112468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.112503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.112691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.112746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.112960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.113015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.113194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.113228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.503 [2024-07-24 20:24:23.113411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.113452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 
00:30:19.503 [2024-07-24 20:24:23.113644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.503 [2024-07-24 20:24:23.113712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.503 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.113934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.113989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.114207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.114261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.114456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.114511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.114709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.114765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.114984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.115040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.115217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.115251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.115438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.115473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.115648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.115709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.115896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.115951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 
00:30:19.504 [2024-07-24 20:24:23.116136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.116189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.116392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.116426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.116601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.116655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.116849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.116905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.117092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.117146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.117294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.117327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.117511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.117573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.117766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.117819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.117985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.118041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.118212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.118247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 
00:30:19.504 [2024-07-24 20:24:23.118419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.118462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.118635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.118694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.118906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.118959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.119155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.119211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.119419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.119459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.119634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.119688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.119911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.119971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.120178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.120232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.120420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.120474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.120648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.120710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 
00:30:19.504 [2024-07-24 20:24:23.120889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.504 [2024-07-24 20:24:23.120943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.504 qpair failed and we were unable to recover it. 00:30:19.504 [2024-07-24 20:24:23.121123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.121180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.121388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.121422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.121606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.121663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.121877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.121933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.122126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.122182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.122364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.122398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.122581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.122634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.122856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.122910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.123072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.123128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 
00:30:19.505 [2024-07-24 20:24:23.123275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.123309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.123509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.123544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.123679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.123713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.123923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.123958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.124146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.124181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.124338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.124372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.124544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.124579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.124754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.124787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.124933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.124967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.125165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.125200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 
00:30:19.505 [2024-07-24 20:24:23.125371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.125405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.125588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.125641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.125828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.125878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.126056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.126111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.126330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.126364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.126521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.126578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.126803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.126856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.127054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.127110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.127310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.127345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 00:30:19.505 [2024-07-24 20:24:23.127547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.505 [2024-07-24 20:24:23.127603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.505 qpair failed and we were unable to recover it. 
00:30:19.505 [2024-07-24 20:24:23.127835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.505 [2024-07-24 20:24:23.127889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.505 qpair failed and we were unable to recover it.
[... the same three-record error triplet (posix.c:1023:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats 208 more times, timestamps 2024-07-24 20:24:23.128106 through 20:24:23.179978 ...]
00:30:19.511 [2024-07-24 20:24:23.180224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.511 [2024-07-24 20:24:23.180277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:19.511 qpair failed and we were unable to recover it.
00:30:19.511 [2024-07-24 20:24:23.180500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.511 [2024-07-24 20:24:23.180562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.511 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.180767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.180821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.181012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.181068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.181247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.181281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.181459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.181494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.181677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.181730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.181947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.182000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.182205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.182239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.182442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.182477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.182712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.182768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 
00:30:19.512 [2024-07-24 20:24:23.183005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.183060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.183256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.183310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.183502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.183565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.183770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.183825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.184046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.184100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.184307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.184342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.184535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.184589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.184792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.184847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.185025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.185079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.185283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.185317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 
00:30:19.512 [2024-07-24 20:24:23.185548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.185604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.185779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.185832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.186063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.186119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.186298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.186333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.186544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.186598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.186781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.186833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.187060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.187115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.187328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.187362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.187555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.187612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.187830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.187887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 
00:30:19.512 [2024-07-24 20:24:23.188113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.188166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.188368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.188402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.188634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.188691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.188901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.188955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.189179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.189235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.189414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.512 [2024-07-24 20:24:23.189458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.512 qpair failed and we were unable to recover it. 00:30:19.512 [2024-07-24 20:24:23.189718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.189787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.190010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.190062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.190294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.190349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.190555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.190590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 
00:30:19.513 [2024-07-24 20:24:23.190790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.190846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.191072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.191127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.191349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.191384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.191564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.191620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.191811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.191866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.192077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.192131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.192346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.192380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.192591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.192647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.192845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.192900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.193090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.193142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 
00:30:19.513 [2024-07-24 20:24:23.193348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.193383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.193555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.193609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.193793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.193849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.194042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.194097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.194273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.194307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.194546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.194601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.194793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.194847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.195069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.195125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.195305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.195340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.195555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.195611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 
00:30:19.513 [2024-07-24 20:24:23.195776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.195829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.196022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.196077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.196256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.196291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.196508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.196563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.196757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.196811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.197030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.197083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.197294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.197329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.197530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.197583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.197776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.197832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.198031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.198084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 
00:30:19.513 [2024-07-24 20:24:23.198293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.198327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.513 qpair failed and we were unable to recover it. 00:30:19.513 [2024-07-24 20:24:23.198556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.513 [2024-07-24 20:24:23.198611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.198822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.198877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.199063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.199116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.199322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.199358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.199521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.199576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.199784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.199842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.200069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.200123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.200337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.200372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.200571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.200627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 
00:30:19.514 [2024-07-24 20:24:23.200796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.200851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.201064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.201120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.201333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.201368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.201590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.201646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.201921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.201987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.202209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.202262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.202521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.202575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.202763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.202819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.202969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.203024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.203237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.203271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 
00:30:19.514 [2024-07-24 20:24:23.203509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.203573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.203770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.203805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.203980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.204017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.204241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.204284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.204512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.204568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.204794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.204849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.514 [2024-07-24 20:24:23.205038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.514 [2024-07-24 20:24:23.205092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.514 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.205259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.205293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.205510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.205546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.205764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.205798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 
00:30:19.515 [2024-07-24 20:24:23.206000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.206034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.206243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.206277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.206506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.206563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.206784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.206838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.207032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.207086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.207263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.207297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.207512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.207570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.207749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.207806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.208020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.208054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.208238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.208272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 
00:30:19.515 [2024-07-24 20:24:23.208517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.208573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.208734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.208768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.208929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.208964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.209100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.209134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.209347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.209382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.209569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.209603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.209803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.209864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.210055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.210112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.210313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.210347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.210569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.210627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 
00:30:19.515 [2024-07-24 20:24:23.210828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.210884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.211099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.211153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.211342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.211377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.211529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.211586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.211765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.211819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.211998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.212052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.212229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.212264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.212480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.212535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.212725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.212786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 00:30:19.515 [2024-07-24 20:24:23.212972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.213032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 
00:30:19.515 [2024-07-24 20:24:23.213223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.515 [2024-07-24 20:24:23.213257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.515 qpair failed and we were unable to recover it. 
00:30:19.817 [2024-07-24 20:24:23.266242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.266299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 
00:30:19.817 [2024-07-24 20:24:23.266478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.266514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.266707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.266762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.266942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.266996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.267224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.267286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.267465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.267502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.267713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.267750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.267966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.268021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.268206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.268244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.268437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.268473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.268666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.268726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 
00:30:19.817 [2024-07-24 20:24:23.268929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.268984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.269225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.269286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.269498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.269564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.269786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.269858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.270048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.270102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.270278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.270315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.270502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.270564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.270764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.270820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.270979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.271035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.271205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.271241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 
00:30:19.817 [2024-07-24 20:24:23.271402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.271446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.271633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.271672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.271899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.271956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.272150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.272204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.272378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.272417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.272639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.817 [2024-07-24 20:24:23.272697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.817 qpair failed and we were unable to recover it. 00:30:19.817 [2024-07-24 20:24:23.272860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.272917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.273102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.273157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.273364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.273399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.273587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.273647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 
00:30:19.818 [2024-07-24 20:24:23.273880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.273936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.274164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.274221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.274422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.274467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.274686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.274723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.274955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.275010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.275233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.275290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.275491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.275529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.275773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.275842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.276035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.276092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.276303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.276338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 
00:30:19.818 [2024-07-24 20:24:23.276564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.276620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.276828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.276900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.277109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.277165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.277310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.277350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.277580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.277643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.277844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.277900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.278148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.278203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.278407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.278450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.278677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.278741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.278983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.279039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 
00:30:19.818 [2024-07-24 20:24:23.279235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.279298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.279492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.279559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.279778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.279837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.280075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.280134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.280296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.280332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.280516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.280572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.280759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.280815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.281047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.281102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.281270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.281306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.281535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.281592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 
00:30:19.818 [2024-07-24 20:24:23.281772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.281832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.282023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.282081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.282269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.282304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.282525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.282561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.282783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.282821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.283002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.283040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.283225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.283261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.283459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.283513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.283735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.283790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.284003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.284061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 
00:30:19.818 [2024-07-24 20:24:23.284284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.284320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.284544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.284599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.284815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.284875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.285097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.285157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.285332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.285368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.285562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.285620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.285844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.285900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.286086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.286140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.286340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.286375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.286572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.286629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 
00:30:19.818 [2024-07-24 20:24:23.286786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.286851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.287049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.287110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.287284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.287320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.287539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.287599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.287817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.287875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.288118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.288175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.288357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.288393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.288600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.288661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.288877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.288938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.289172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.289227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 
00:30:19.818 [2024-07-24 20:24:23.289446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.289483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.289662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.289721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.289896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.289952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.290171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.290231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.290403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.290445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.290671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.290740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.290958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.291017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.291241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.291277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.818 [2024-07-24 20:24:23.291493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.818 [2024-07-24 20:24:23.291558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.818 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.291751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.291807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 
00:30:19.819 [2024-07-24 20:24:23.292039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.292100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.292272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.292308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.292486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.292520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.292747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.292811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.293027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.293081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.293301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.293339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.293537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.293592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.293810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.293868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.294089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.294151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 00:30:19.819 [2024-07-24 20:24:23.294336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.819 [2024-07-24 20:24:23.294372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.819 qpair failed and we were unable to recover it. 
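errno = 111 is ECONNREFUSED on Linux: every connect() to 10.0.0.2 port 4420 (the standard NVMe/TCP port) is being refused, which normally means the host is reachable but nothing is listening on that port at that moment, so the initiator tears the qpair down and immediately retries. A minimal standalone sketch of the same failure mode follows; it is illustrative only, not SPDK code, with the address and port taken from the log above:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);            /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no listener on 10.0.0.2:4420 the remote side answers the SYN
     * with RST and connect() fails with ECONNREFUSED, printed as 111 on
     * Linux, matching the posix_sock_create errors above. */
    if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Run against a host with no listener on port 4420, this prints "connect() failed, errno = 111 (Connection refused)", the exact error the log is reporting.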
00:30:19.819 [2024-07-24 20:24:23.294661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.819 [2024-07-24 20:24:23.294773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.819 qpair failed and we were unable to recover it.
[... the same triplet repeats 58 more times for tqpair=0x578ea0, 2024-07-24 20:24:23.295080 through 20:24:23.310731 ...]
00:30:19.820 [2024-07-24 20:24:23.310911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.820 [2024-07-24 20:24:23.310948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.820 qpair failed and we were unable to recover it.
00:30:19.820 [2024-07-24 20:24:23.311172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.311237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.311508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.311544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.311728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.311770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.311958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.311993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.312246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.312320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.312508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.312549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.312701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.312737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.312971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.313029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.313224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.313280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.313517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.313556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 
00:30:19.820 [2024-07-24 20:24:23.313766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.313825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.314029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.314082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.314313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.314369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.314545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.314585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.314823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.314861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.315094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.315150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.315353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.315389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.315564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.315603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.315831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.315901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.316119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.316182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 
00:30:19.820 [2024-07-24 20:24:23.316395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.316438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.316608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.316653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.316846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.316912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.317136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.317193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.317400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.317445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.317622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.317660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.317848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.317907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.318069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.318128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.318276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.318311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.318480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.318517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 
00:30:19.820 [2024-07-24 20:24:23.318706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.318743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.318964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.319000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.319210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.319246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.319435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.319475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.319639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.319703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.319945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.320013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.320207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.320262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.320504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.320559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.320781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.320843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.321038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.321096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 
00:30:19.820 [2024-07-24 20:24:23.321312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.321347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.321531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.321588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.321803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.321860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.322061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.322118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.322291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.322326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.322544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.322601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.322817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.322874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.323072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.323128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.323348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.323384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.323591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.323647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 
00:30:19.820 [2024-07-24 20:24:23.323877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.323935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.324151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.324210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.324419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.324463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.324657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.324713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.324934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.324994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.325209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.325265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.325484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.325520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.325740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.325794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.326011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.326065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.326242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.326278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 
00:30:19.820 [2024-07-24 20:24:23.326503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.326565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.326716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.326788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.326992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.327046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.327259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.327296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.327534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.820 [2024-07-24 20:24:23.327591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.820 qpair failed and we were unable to recover it. 00:30:19.820 [2024-07-24 20:24:23.327792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.327827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.328013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.328078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.328288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.328324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.328541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.328601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.328817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.328874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 
00:30:19.821 [2024-07-24 20:24:23.329086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.329141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.329342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.329378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.329580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.329634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.329796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.329851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.330040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.330101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.330312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.330351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.330505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.330562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.330793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.330848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.331095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.331130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.331335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.331375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 
00:30:19.821 [2024-07-24 20:24:23.331558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.331613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.331825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.331882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.332086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.332161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.332369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.332405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.332616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.332674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.332850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.332906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.333136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.333172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.333380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.333416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.333708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.333807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.334107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.334174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 
00:30:19.821 [2024-07-24 20:24:23.334493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.334555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.334802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.334867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.335114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.335182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.335421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.335516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.335731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.335797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.336061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.336126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.336360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.336447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.336686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.336760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.337035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.337102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.337348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.337412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 
00:30:19.821 [2024-07-24 20:24:23.337670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.337708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.337929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.337994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.338230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.338297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.338534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.338572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.338778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.338815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.339084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.339147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.339417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.339509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.339702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.339739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.339910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.339947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.340132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.340197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 
00:30:19.821 [2024-07-24 20:24:23.340471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.340508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.340731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.340767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.340974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.341010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.341170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.341208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.341442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.341479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.341685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.341728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.341908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.341944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.342159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.342238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.342530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.342566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.342750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.342824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 
00:30:19.821 [2024-07-24 20:24:23.343074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.343139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.343406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.343517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.343754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.343789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.343996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.344033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.344276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.344341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.344634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.344672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.344857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.344922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.345168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.345233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.345463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.345500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 00:30:19.821 [2024-07-24 20:24:23.345691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.821 [2024-07-24 20:24:23.345729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.821 qpair failed and we were unable to recover it. 
00:30:19.821 [2024-07-24 20:24:23.345961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.821 [2024-07-24 20:24:23.346024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.821 qpair failed and we were unable to recover it.
00:30:19.824 [... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats back-to-back for every reconnect attempt through 2024-07-24 20:24:23.406060 ...]
00:30:19.824 [2024-07-24 20:24:23.406313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.824 [2024-07-24 20:24:23.406348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.824 qpair failed and we were unable to recover it. 00:30:19.824 [2024-07-24 20:24:23.406532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.824 [2024-07-24 20:24:23.406567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.824 qpair failed and we were unable to recover it. 00:30:19.824 [2024-07-24 20:24:23.406735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.824 [2024-07-24 20:24:23.406799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.824 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.407030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.407093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.407333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.407369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.407576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.407611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.407856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.407919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.408205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.408269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.408512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.408549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.408751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.408816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 
00:30:19.825 [2024-07-24 20:24:23.409053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.409117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.409333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.409398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.409659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.409695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.409897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.409961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.410206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.410268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.410497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.410534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.410672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.410708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.410882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.410957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.411212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.411276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.411528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.411576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 
00:30:19.825 [2024-07-24 20:24:23.411750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.411791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.412014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.412078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.412347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.412410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.412678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.412714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.412919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.412955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.413175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.413239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.413481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.413536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.413750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.413814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.414062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.414097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.414286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.414350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 
00:30:19.825 [2024-07-24 20:24:23.414594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.414629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.414844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.414908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.415144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.415179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.415343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.415414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.415659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.415694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.415970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.416034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.416275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.416310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.416514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.416549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.416760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.416824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.417052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.417114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 
00:30:19.825 [2024-07-24 20:24:23.417371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.417407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.417599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.417635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.417894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.417958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.418225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.418288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.418527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.418564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.418772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.418837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.419101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.419165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.419409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.419507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.419690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.419726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.419913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.419978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 
00:30:19.825 [2024-07-24 20:24:23.420209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.420273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.420502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.420538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.420713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.420749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.420965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.421028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.421285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.421349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.421604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.421640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.421819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.421855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.422054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.422118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.422388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.422467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.422676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.422732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 
00:30:19.825 [2024-07-24 20:24:23.422979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.423014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.423213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.423277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.423526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.423562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.423694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.423764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.424010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.424056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.424339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.424404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.424752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.424817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.425099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.425163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.425459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.425495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.425685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.425750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 
00:30:19.825 [2024-07-24 20:24:23.425996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.426061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.426326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.426390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.426638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.426674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.426875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.426937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.427212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.427275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.427524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.825 [2024-07-24 20:24:23.427591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.825 qpair failed and we were unable to recover it. 00:30:19.825 [2024-07-24 20:24:23.427850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.427886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.428138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.428202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.428472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.428539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.428897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.428960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 
00:30:19.826 [2024-07-24 20:24:23.429278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.429346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.429600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.429636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.429824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.429887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.430160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.430224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.430541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.430577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.430905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.430968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.431277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.431341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.431621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.431657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.431872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.431913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.432189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.432254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 
00:30:19.826 [2024-07-24 20:24:23.432527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.432593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.432891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.432959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.433262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.433298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.433605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.433641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.433854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.433918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.434217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.434280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.434527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.434563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.434795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.434860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.435110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.435173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.435410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.435488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 
00:30:19.826 [2024-07-24 20:24:23.435758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.435794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.436056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.436119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.436459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.436533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.436740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.436804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.437105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.437150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.437492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.437548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.437763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.437827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.438162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.438238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.438529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.438565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.438813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.438878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 
00:30:19.826 [2024-07-24 20:24:23.439155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.439218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.439454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.439523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.439673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.439709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.439897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.439960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.440219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.440282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.440535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.440576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.440727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.440763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.440916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.440980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.441209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.441272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.441514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.441579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 
00:30:19.826 [2024-07-24 20:24:23.441818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.441854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.442049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.442113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.442382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.442470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.442762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.442827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.443138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.443173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.443492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.443527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.443735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.443798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.444058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.444121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.444400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.444445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 00:30:19.826 [2024-07-24 20:24:23.444656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.826 [2024-07-24 20:24:23.444730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.826 qpair failed and we were unable to recover it. 
00:30:19.826 [2024-07-24 20:24:23.444992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.826 [2024-07-24 20:24:23.445055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.826 qpair failed and we were unable to recover it.
[... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for roughly 200 further connection attempts, timestamps 2024-07-24 20:24:23.445360 through 20:24:23.514779 ...]
00:30:19.829 [2024-07-24 20:24:23.514779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.829 [2024-07-24 20:24:23.514816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:19.829 qpair failed and we were unable to recover it.
00:30:19.829 [2024-07-24 20:24:23.515026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.515066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.515257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.515321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.515640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.515683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.515954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.515990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.516221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.516257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.516560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.516603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.516795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.516833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.517016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.517069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.517294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.517331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.517670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.517707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 
00:30:19.829 [2024-07-24 20:24:23.517922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.517962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.518219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.518285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.518619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.518657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.518806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.518841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.519014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.519051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.519394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.519476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.829 [2024-07-24 20:24:23.519714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.829 [2024-07-24 20:24:23.519751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.829 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.519957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.519998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.520183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.520247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.520478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.520540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 
00:30:19.830 [2024-07-24 20:24:23.520771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.520813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.521026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.521062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.521280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.521344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.521644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.521682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.521889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.521924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.522139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.522199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.522407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.522516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.522830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.522867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.523180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.523247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.523553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.523597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 
00:30:19.830 [2024-07-24 20:24:23.523781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.523817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.523995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.524031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.524215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.524250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.524492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.524534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.524702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.524738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.524928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.524983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.525224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.525259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.525475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.525517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.525694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.525730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.525932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.526005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 
00:30:19.830 [2024-07-24 20:24:23.526287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.526361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.526669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.526708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.526881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.526916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.527122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.527187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.527535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.527573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.527848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.527887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.528096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.528161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.528492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.528555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.528790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.528826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.529048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.529085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 
00:30:19.830 [2024-07-24 20:24:23.529398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.529511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.529693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.529730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.529955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.529994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.530233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.530313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.530650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.530687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.530859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.530895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.531076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.531113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.531466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.531539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.531808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.531845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.532064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.532101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 
00:30:19.830 [2024-07-24 20:24:23.532448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.532486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.532731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.532773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.533018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.533083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.533385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.533493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.533825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.533863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.534170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.534236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.534560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.534597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.534863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.534929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.535239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.535307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.535687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.535724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 
00:30:19.830 [2024-07-24 20:24:23.535917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.535987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.536379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.536462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.536761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.536836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.537195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.537270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.537605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.537642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.537886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.537966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.538326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.538365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.538625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.538662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.538953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.539033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.539328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.539392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 
00:30:19.830 [2024-07-24 20:24:23.539693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.539740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.539922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.539994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.540187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.540261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.540587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.540623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.540803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.540842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.541054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.541090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.541300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.541379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.541718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.541798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.542119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.542162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 00:30:19.830 [2024-07-24 20:24:23.542393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.830 [2024-07-24 20:24:23.542486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.830 qpair failed and we were unable to recover it. 
00:30:19.831 [2024-07-24 20:24:23.542807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.542883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.543232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.543297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.543609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.543646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.543934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.543998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.544325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.544390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.544722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.544804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.545131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.545168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.545552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.545590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.545867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.545948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.546232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.546294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 
00:30:19.831 [2024-07-24 20:24:23.546633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.546671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.546909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.546945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.547285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.547352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.547671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.547744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.547985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.548022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.548219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.548291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.548561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.548597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.548785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.548853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.549118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.549155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.549330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.549393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 
00:30:19.831 [2024-07-24 20:24:23.549664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.549700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.549891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.549955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.550242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.550278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.550493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.550549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.550766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.550834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.551098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.551168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.551471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.551509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.551739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.551806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.552042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.552106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.552348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.552418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 
00:30:19.831 [2024-07-24 20:24:23.552712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.552748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.552938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.553013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.553251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.553316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.553577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.553615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.553847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.553884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.554101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.554168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.554498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.554534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.554784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.554821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.555067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.555142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.555466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.555547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 
00:30:19.831 [2024-07-24 20:24:23.555841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.555918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.556169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.556234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.556511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.556548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.556759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.556823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.557085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.557157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.557499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.557536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.557752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.557789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.558134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.558198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.558508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.558546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 00:30:19.831 [2024-07-24 20:24:23.558800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.831 [2024-07-24 20:24:23.558871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:19.831 qpair failed and we were unable to recover it. 
00:30:20.107 [2024-07-24 20:24:23.618394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.107 [2024-07-24 20:24:23.618475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.107 qpair failed and we were unable to recover it. 00:30:20.107 [2024-07-24 20:24:23.618700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.107 [2024-07-24 20:24:23.618734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.107 qpair failed and we were unable to recover it. 00:30:20.107 [2024-07-24 20:24:23.618983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.107 [2024-07-24 20:24:23.619018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.107 qpair failed and we were unable to recover it. 00:30:20.107 [2024-07-24 20:24:23.619274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.619313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.619616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.619650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.619887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.619920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.620098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.620132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.620398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.620477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.620766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.620829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.621086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.621149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 
00:30:20.108 [2024-07-24 20:24:23.621462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.621497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.621813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.621876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.622225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.622288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.622549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.622584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.622848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.622920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.623202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.623264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.623615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.623680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.623975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.624039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.624296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.624331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.624542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.624607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 
00:30:20.108 [2024-07-24 20:24:23.624816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.624879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.625160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.625222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.625573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.625609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.625856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.625919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.626255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.626318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.626645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.626682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.626913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.626948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.627264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.627327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.627676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.627713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.627985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.628048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 
00:30:20.108 [2024-07-24 20:24:23.628311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.628346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.628608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.628644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.628847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.628910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.629188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.629250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.629601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.629671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.630042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.630106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.630462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.630526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.630803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.630867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.108 qpair failed and we were unable to recover it. 00:30:20.108 [2024-07-24 20:24:23.631130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.108 [2024-07-24 20:24:23.631165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.631388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.631484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 
00:30:20.109 [2024-07-24 20:24:23.631664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.631701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.631935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.631998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.632271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.632306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.632491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.632556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.632784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.632857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.633186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.633249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.633536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.633572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.633793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.633856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.634209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.634272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.634613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.634649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 
00:30:20.109 [2024-07-24 20:24:23.634888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.634924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.635274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.635337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.635717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.635789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.636070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.636133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.636450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.636487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.636826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.636890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.637238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.637302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.637656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.637693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.638085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.638149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.638473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.638528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 
00:30:20.109 [2024-07-24 20:24:23.638759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.638822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.639081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.639145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.639466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.639502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.639864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.639927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.640267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.640331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.640629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.640664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.640848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.640883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.641220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.641283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.641583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.641618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.641886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.641949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 
00:30:20.109 [2024-07-24 20:24:23.642256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.109 [2024-07-24 20:24:23.642291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.109 qpair failed and we were unable to recover it. 00:30:20.109 [2024-07-24 20:24:23.642650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.642723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.643020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.643083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.643325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.643389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.643697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.643733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.644056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.644119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.644452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.644511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.644772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.644836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.645170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.645233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.645573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.645638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 
00:30:20.110 [2024-07-24 20:24:23.645999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.646063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.646379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.646454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.646696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.646731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.646957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.647021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.647283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.647345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.647686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.647723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.648101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.648178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.648535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.648571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.648818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.648882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.649174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.649236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 
00:30:20.110 [2024-07-24 20:24:23.649502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.649538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.649835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.649899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.650212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.650275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.650612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.650676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.651026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.651099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.651398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.651476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.651775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.651839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.652134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.652197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.652504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.652539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.652802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.652867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 
00:30:20.110 [2024-07-24 20:24:23.653140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.653202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.653541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.653605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.653883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.653918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.654112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.654174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.654508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.654572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.654836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.654899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.655165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.655200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.655393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.655471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.655708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.655777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.110 qpair failed and we were unable to recover it. 00:30:20.110 [2024-07-24 20:24:23.656033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.110 [2024-07-24 20:24:23.656096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 
00:30:20.111 [2024-07-24 20:24:23.656401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.656451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.656722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.656786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.657094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.657166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.657538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.657603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.658011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.658075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.658455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.658524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.658784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.658844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.659129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.659192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.659457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.659508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.659685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.659744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 
00:30:20.111 [2024-07-24 20:24:23.659977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.660041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.660272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.660335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.660584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.660619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.660815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.660878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.661138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.661202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.661511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.661547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.661739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.661775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.662056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.662119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.662496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.662531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.662865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.662901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 
00:30:20.111 [2024-07-24 20:24:23.663145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.663213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.663523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.663587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.663891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.663955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.664256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.664319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.664680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.664744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.665053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.665116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.665351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.665414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.665794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.665858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.666174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.666210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 00:30:20.111 [2024-07-24 20:24:23.666595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.111 [2024-07-24 20:24:23.666670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.111 qpair failed and we were unable to recover it. 
00:30:20.111 [2024-07-24 20:24:23.666996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.111 [2024-07-24 20:24:23.667059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.111 qpair failed and we were unable to recover it.
00:30:20.117 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats through 2024-07-24 20:24:23.741831 ...]
00:30:20.117 [2024-07-24 20:24:23.742210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.742275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.742601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.742638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.742864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.742928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.743237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.743301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.743613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.743650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.743904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.743969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.744289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.744352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.744740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.744819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.745097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.745133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.745363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.745445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 
00:30:20.117 [2024-07-24 20:24:23.745694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.745750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.745983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.746047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.746281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.746317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.746558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.746624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.746895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.746959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.747253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.747317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.747565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.747601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.747855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.747924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.748232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.748297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.117 [2024-07-24 20:24:23.748634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.748673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 
00:30:20.117 [2024-07-24 20:24:23.748853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.117 [2024-07-24 20:24:23.748889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.117 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.749147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.749215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.749533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.749570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.749819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.749886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.750229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.750297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.750621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.750658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.750820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.750904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.751163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.751228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.751500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.751538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.751737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.751774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 
00:30:20.118 [2024-07-24 20:24:23.751982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.752018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.752241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.752306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.752589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.752627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.752815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.752851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.753017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.753059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.753294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.753358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.753644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.753681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.753829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.753864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.754070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.754126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.754365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.754445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 
00:30:20.118 [2024-07-24 20:24:23.754702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.754739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.754922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.754966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.755198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.755263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.755535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.755575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.755773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.755817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.756082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.756119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.756472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.756540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.756771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.756808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.757048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.757085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.757395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.757509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 
00:30:20.118 [2024-07-24 20:24:23.757769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.757806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.758093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.118 [2024-07-24 20:24:23.758130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.118 qpair failed and we were unable to recover it. 00:30:20.118 [2024-07-24 20:24:23.758367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.758451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.758683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.758721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.758925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.758964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.759172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.759237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.759512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.759557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.759816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.759852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.760059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.760126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.760388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.760476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 
00:30:20.119 [2024-07-24 20:24:23.760704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.760741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.760958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.760994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.761230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.761295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.761559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.761601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.761790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.761826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.762000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.762037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.762262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.762326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.762626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.762701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.762959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.762995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.763167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.763234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 
00:30:20.119 [2024-07-24 20:24:23.763602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.763668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.764000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.764067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.764378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.764419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.764798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.764863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.765235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.765302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.765632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.765717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.766040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.766075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.766479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.766518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.766745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.766780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.767030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.767097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 
00:30:20.119 [2024-07-24 20:24:23.767353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.767417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.767770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.767808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.767981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.768017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.768241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.768306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.768567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.768606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.768797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.768834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.769009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.769048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.769259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.769324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.769590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.769630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 00:30:20.119 [2024-07-24 20:24:23.769800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.119 [2024-07-24 20:24:23.769836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.119 qpair failed and we were unable to recover it. 
00:30:20.119 [2024-07-24 20:24:23.770058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.770095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.770329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.770393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.770648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.770685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.770838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.770873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.771079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.771116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.771311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.771376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.771668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.771705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.771868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.771904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.772122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.772189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.772464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.772534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 
00:30:20.120 [2024-07-24 20:24:23.772749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.772787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.772966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.773004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.773192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.773234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.773449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.773486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.773694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.773737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.773921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.773958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.774172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.774214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.774484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.774541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.774726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.774769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.774957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.774992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 
00:30:20.120 [2024-07-24 20:24:23.775178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.775214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.775398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.775482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.775739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.775776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.775966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.776002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.776210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.776247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.776513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.776550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.776731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.776768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.776916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.776951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.777125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.777162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.777386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.777470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 
00:30:20.120 [2024-07-24 20:24:23.777718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.777755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.777926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.777968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.778135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.778177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.778464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.778529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.778744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.778781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.778926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.778962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.779150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.779187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.779444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.120 [2024-07-24 20:24:23.779522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.120 qpair failed and we were unable to recover it. 00:30:20.120 [2024-07-24 20:24:23.779704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.121 [2024-07-24 20:24:23.779740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.121 qpair failed and we were unable to recover it. 00:30:20.121 [2024-07-24 20:24:23.779944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.121 [2024-07-24 20:24:23.779979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.121 qpair failed and we were unable to recover it. 
00:30:20.121 [2024-07-24 20:24:23.780222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:20.121 [2024-07-24 20:24:23.780287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 
00:30:20.121 qpair failed and we were unable to recover it. 
00:30:20.121 [... the three-line error above repeats 98 times in total for tqpair=0x578ea0, identical except for timestamps (connect() timestamps 20:24:23.780222 through 20:24:23.806362) ...] 
00:30:20.123 [2024-07-24 20:24:23.806650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:20.123 [2024-07-24 20:24:23.806706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 
00:30:20.123 qpair failed and we were unable to recover it. 
00:30:20.126 [... the three-line error above repeats 112 times in total for tqpair=0x7fe95c000b90, identical except for timestamps (connect() timestamps 20:24:23.806650 through 20:24:23.835512) ...] 
00:30:20.126 [2024-07-24 20:24:23.835757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.126 [2024-07-24 20:24:23.835810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.126 qpair failed and we were unable to recover it. 00:30:20.126 [2024-07-24 20:24:23.836004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.126 [2024-07-24 20:24:23.836060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.126 qpair failed and we were unable to recover it. 00:30:20.126 [2024-07-24 20:24:23.836262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.126 [2024-07-24 20:24:23.836297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.126 qpair failed and we were unable to recover it. 00:30:20.126 [2024-07-24 20:24:23.836501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.126 [2024-07-24 20:24:23.836536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.126 qpair failed and we were unable to recover it. 00:30:20.126 [2024-07-24 20:24:23.836711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.126 [2024-07-24 20:24:23.836746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.126 qpair failed and we were unable to recover it. 00:30:20.126 [2024-07-24 20:24:23.836957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.126 [2024-07-24 20:24:23.836993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.126 qpair failed and we were unable to recover it. 00:30:20.126 [2024-07-24 20:24:23.837146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.126 [2024-07-24 20:24:23.837181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.126 qpair failed and we were unable to recover it. 00:30:20.126 [2024-07-24 20:24:23.837390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.837425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.837604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.837658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.837887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.837943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 
00:30:20.127 [2024-07-24 20:24:23.838150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.838206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.838421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.838467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.838660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.838716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.838901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.838953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.839134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.839189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.839395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.839440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.839627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.839683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.839858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.839913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.840162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.840222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.840462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.840498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 
00:30:20.127 [2024-07-24 20:24:23.840721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.840775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.840996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.841052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.841251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.841311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.841527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.841562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.841832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.841889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.842115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.842168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.842386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.842421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.842610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.842645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.842864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.842920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.843145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.843201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 
00:30:20.127 [2024-07-24 20:24:23.843424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.843468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.843672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.843707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.843902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.843957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.844190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.844244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.844497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.844533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.844755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.844811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.845062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.845118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.845373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.845407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.127 [2024-07-24 20:24:23.845639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.127 [2024-07-24 20:24:23.845675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.127 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.845870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.845925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 
00:30:20.128 [2024-07-24 20:24:23.846156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.846212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.846379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.846423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.846643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.846678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.846853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.846908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.847149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.847205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.847398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.847442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.847672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.847734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.848000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.848061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.848301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.848356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.848530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.848565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 
00:30:20.128 [2024-07-24 20:24:23.848759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.848815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.849023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.849077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.849214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.849249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.849416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.849472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.849693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.849747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.849979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.850035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.850272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.850326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.850591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.850627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.850876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.850935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.851165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.851220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 
00:30:20.128 [2024-07-24 20:24:23.851449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.851484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.851713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.851750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.852035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.852090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.852277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.852331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.852550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.852606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.852805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.852860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.853080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.853135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.853283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.853318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.853505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.853563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.853730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.853786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 
00:30:20.128 [2024-07-24 20:24:23.853978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.854033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.854201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.854236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.854448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.854483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.854670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.854725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.854888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.854941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.855124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.855181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.128 [2024-07-24 20:24:23.855384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.128 [2024-07-24 20:24:23.855419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.128 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.855672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.855737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.855944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.856002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.856204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.856258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 
00:30:20.129 [2024-07-24 20:24:23.856491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.856557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.856781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.856837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.857053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.857109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.857264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.857299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.857524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.857581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.857807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.857862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.858021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.858076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.858251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.858287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.858489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.858524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.858695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.858731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 
00:30:20.129 [2024-07-24 20:24:23.858908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.858943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.859112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.859147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.859330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.859365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.859557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.859614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.859834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.859889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.860076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.860132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.860339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.860374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.860575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.860631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.860875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.860937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.861200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.861256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 
00:30:20.129 [2024-07-24 20:24:23.861458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.861495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.861719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.861773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.861996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.862051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.862268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.862322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.862582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.862637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.862872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.862912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.863142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.863204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.863466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.863502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.863760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.863795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.864031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.864085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 
00:30:20.129 [2024-07-24 20:24:23.864282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.864338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.864480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.864516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.864719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.864774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.864994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.865049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.129 qpair failed and we were unable to recover it. 00:30:20.129 [2024-07-24 20:24:23.865285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.129 [2024-07-24 20:24:23.865339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.865524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.865580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.865803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.865856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.866039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.866095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.866295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.866330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.866566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.866622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 
00:30:20.130 [2024-07-24 20:24:23.866840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.866894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.867113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.867168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.867374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.867409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.867635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.867703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.867879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.867934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.868084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.868140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.868341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.868376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.868556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.868611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.868778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.868833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 00:30:20.130 [2024-07-24 20:24:23.869004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.869060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it. 
00:30:20.130 [2024-07-24 20:24:23.869272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.130 [2024-07-24 20:24:23.869307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.130 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats approximately 200 more times between 20:24:23.869 and 20:24:23.921, with runtime stamps advancing from 00:30:20.130 to 00:30:20.412 ...]
00:30:20.412 [2024-07-24 20:24:23.921712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.921768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it.
00:30:20.412 [2024-07-24 20:24:23.921954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.922010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.922228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.922283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.922461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.922497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.922681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.922736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.922937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.922992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.923209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.923244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.923449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.923498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.923716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.923772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.923994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.924050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.924268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.924325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 
00:30:20.412 [2024-07-24 20:24:23.924540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.924597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.924813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.924866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.925095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.925149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.412 qpair failed and we were unable to recover it. 00:30:20.412 [2024-07-24 20:24:23.925359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.412 [2024-07-24 20:24:23.925394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.925617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.925673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.925874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.925930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.926128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.926181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.926384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.926419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.926663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.926725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.926945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.927002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 
00:30:20.413 [2024-07-24 20:24:23.927229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.927284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.927479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.927544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.927772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.927826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.928017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.928074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.928242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.928277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.928455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.928497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.928661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.928720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.928943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.928998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.929210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.929245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.929415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.929467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 
00:30:20.413 [2024-07-24 20:24:23.929664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.929721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.929931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.929987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.930169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.930225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.930405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.930458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.930648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.930718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.930893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.930948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.931159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.931214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.931442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.931491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.931658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.931720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.931953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.932009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 
00:30:20.413 [2024-07-24 20:24:23.932205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.932241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.932438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.932474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.932640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.932695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.932918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.932973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.933191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.933246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.933454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.933491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.933712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.933746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.933924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.933979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.934161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.934217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.934423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.934478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 
00:30:20.413 [2024-07-24 20:24:23.934693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.413 [2024-07-24 20:24:23.934728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.413 qpair failed and we were unable to recover it. 00:30:20.413 [2024-07-24 20:24:23.934943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.934999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.935215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.935270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.935484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.935520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.935736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.935792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.935980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.936037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.936196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.936251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.936460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.936496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.936660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.936720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.936912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.936966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 
00:30:20.414 [2024-07-24 20:24:23.937183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.937240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.937413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.937460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.937667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.937730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.937972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.938027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.938226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.938279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.938466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.938507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.938683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.938739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.938940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.938995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.939211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.939266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.939472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.939507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 
00:30:20.414 [2024-07-24 20:24:23.939722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.939778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.939960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.940015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.940215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.940250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.940442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.940478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.940701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.940755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.940989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.941043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.941257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.941313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.941502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.941561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.941747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.941802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.942030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.942083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 
00:30:20.414 [2024-07-24 20:24:23.942294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.942328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.942547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.942602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.942767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.942822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.942984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.943040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.943228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.943263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.943480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.943535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.943746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.943800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.944041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.944096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.414 [2024-07-24 20:24:23.944273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.414 [2024-07-24 20:24:23.944308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.414 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.944539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.944596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 
00:30:20.415 [2024-07-24 20:24:23.944830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.944886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.945048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.945103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.945310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.945345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.945532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.945588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.945772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.945827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.946016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.946072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.946252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.946287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.946480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.946546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.946769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.946824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.947045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.947101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 
00:30:20.415 [2024-07-24 20:24:23.947302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.947337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.947497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.947558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.947774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.947838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.948031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.948086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.948291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.948326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.948550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.948613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.948798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.948855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.949094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.949148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.949320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.949355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.949541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.949597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 
00:30:20.415 [2024-07-24 20:24:23.949819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.949874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.950084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.950139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.950315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.950350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.950507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.950569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.950789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.950843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.951071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.951127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.951333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.951368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.951568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.951622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.951832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.951887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.952111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.952167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 
00:30:20.415 [2024-07-24 20:24:23.952376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.952410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.952613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.952667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.952888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.952944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.953174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.953229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.953411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.953454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.953645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.953698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.415 qpair failed and we were unable to recover it. 00:30:20.415 [2024-07-24 20:24:23.953920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.415 [2024-07-24 20:24:23.953976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.416 qpair failed and we were unable to recover it. 00:30:20.416 [2024-07-24 20:24:23.954210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.416 [2024-07-24 20:24:23.954266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.416 qpair failed and we were unable to recover it. 00:30:20.416 [2024-07-24 20:24:23.954544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.416 [2024-07-24 20:24:23.954601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.416 qpair failed and we were unable to recover it. 00:30:20.416 [2024-07-24 20:24:23.954877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.416 [2024-07-24 20:24:23.954945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.416 qpair failed and we were unable to recover it. 
00:30:20.416 [2024-07-24 20:24:23.955173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.416 [2024-07-24 20:24:23.955228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.416 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 20:24:23.955400 through 20:24:23.986764 ...]
00:30:20.419 [2024-07-24 20:24:23.986953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.419 [2024-07-24 20:24:23.987009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.419 qpair failed and we were unable to recover it.
00:30:20.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2172960 Killed                  "${NVMF_APP[@]}" "$@"
00:30:20.419 [2024-07-24 20:24:23.987196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.419 [2024-07-24 20:24:23.987249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.419 qpair failed and we were unable to recover it.
00:30:20.419 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:20.419 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:20.419 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:20.419 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:20.419 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the reconnect-failure sequence (tqpair=0x7fe95c000b90, addr=10.0.0.2, port=4420), previously interleaved with the xtrace lines above, repeats from 20:24:23.987453 through 20:24:23.993957 ...]
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2173511
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2173511
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2173511 ']'
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:20.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
[... the reconnect-failure sequence (tqpair=0x7fe95c000b90, addr=10.0.0.2, port=4420), previously interleaved with the xtrace lines above, repeats from 20:24:23.994177 through 20:24:23.995508 ...]
00:30:20.420 20:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.420 [2024-07-24 20:24:23.995712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.995767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.995969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.996025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.996179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.996234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.996415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.996461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.996650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.996718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.996941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.996995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.997203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.997255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.997468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.997524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.997727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.997793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 
00:30:20.420 [2024-07-24 20:24:23.997998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.998055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.998231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.420 [2024-07-24 20:24:23.998266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.420 qpair failed and we were unable to recover it. 00:30:20.420 [2024-07-24 20:24:23.998495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.421 [2024-07-24 20:24:23.998552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.421 qpair failed and we were unable to recover it. 00:30:20.421 [2024-07-24 20:24:23.998771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.421 [2024-07-24 20:24:23.998828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.421 qpair failed and we were unable to recover it. 00:30:20.421 [2024-07-24 20:24:23.999058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.421 [2024-07-24 20:24:23.999115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.421 qpair failed and we were unable to recover it. 00:30:20.421 [2024-07-24 20:24:23.999325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.421 [2024-07-24 20:24:23.999360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.421 qpair failed and we were unable to recover it. 00:30:20.421 [2024-07-24 20:24:23.999554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.421 [2024-07-24 20:24:23.999610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.421 qpair failed and we were unable to recover it. 00:30:20.421 [2024-07-24 20:24:23.999798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.421 [2024-07-24 20:24:23.999857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.421 qpair failed and we were unable to recover it. 00:30:20.421 [2024-07-24 20:24:24.000064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.421 [2024-07-24 20:24:24.000122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.421 qpair failed and we were unable to recover it. 00:30:20.421 [2024-07-24 20:24:24.000292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.421 [2024-07-24 20:24:24.000328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.421 qpair failed and we were unable to recover it. 
00:30:20.421 [2024-07-24 20:24:24.000498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.000571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.000784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.000844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.001051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.001107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.001301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.001336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.001530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.001596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.001815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.001872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.002066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.002125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.002314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.002350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.002546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.002602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.002763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.002819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.003053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.003089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.003274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.003314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.003540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.003598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.003835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.003901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.004088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.004147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.004358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.004393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.004579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.004615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.004831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.004871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.005065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.005121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.005332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.005367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.005571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.005633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.005800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.005856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.006006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.006065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.421 [2024-07-24 20:24:24.006276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.421 [2024-07-24 20:24:24.006312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.421 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.006483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.006549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.006730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.006785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.006964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.007024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.007242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.007280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.007493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.007560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.007751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.007806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.008006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.008064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.008245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.008280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.008462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.008500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.008733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.008789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.009001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.009065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.009245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.009280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.009443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.009479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.009701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.009752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.009921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.009977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.010155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.010191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.010399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.010452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.010684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.010741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.010910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.010969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.011165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.011230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.011465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.011502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.011684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.011740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.011958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.012018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.012200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.012257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.012444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.012501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.012696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.012756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.012979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.013036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.013263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.013318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.013545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.013603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.013763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.013818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.014041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.014077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.014235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.014270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.014454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.014493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.014687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.014744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.014954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.015011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.015219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.015257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.015442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.015478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.422 [2024-07-24 20:24:24.015661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.422 [2024-07-24 20:24:24.015718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.422 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.015928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.015987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.016217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.016281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.016455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.016492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.016695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.016756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.016935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.016990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.017277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.017378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.017643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.017682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.017961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.018033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.018308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.018376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.018654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.018691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.018903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.018969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.019221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.019312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.019672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.019771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.020119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.020211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.020534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.020584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.020839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.020929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.021284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.021356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.021649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.021686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.021876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.021954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.022192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.022259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.022522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.022569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.022837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.022886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.023104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.023195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.023519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.023568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.023805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.023894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.024224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.024317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.024617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.024654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.024858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.024923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.025165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.025230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.025448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.025494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.025704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.025753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.025961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.026011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.026352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.026459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.026754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.026840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.027175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.027245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.027498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.027535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.423 [2024-07-24 20:24:24.027747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.423 [2024-07-24 20:24:24.027812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.423 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.028080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.028146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.028420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.028518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.028755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.028820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.029181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.029269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.029616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.029666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.030026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.030118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.030490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.030529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.030755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.030821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.031111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.031148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.031405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.031511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.031842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.031931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.032259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.032347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.032702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.032752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.033111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.033200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.033534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.033631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.033943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.034010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.034255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.034291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.034524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.034590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.034857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.034924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.035295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.035385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.035685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.035734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.036060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.036141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.036402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.036488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.036774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.036841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.037091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.037127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.037339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.037405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.037701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.037767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.038080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.038144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.038397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.038452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.038638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.038703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.038968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.039033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.039270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.039335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.039605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.039642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.039829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.039894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.040102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.040167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.040416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.040500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.040761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.040797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.424 [2024-07-24 20:24:24.040977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.424 [2024-07-24 20:24:24.041013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.424 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.041295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.041359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.041612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.041678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.041889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.041925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.042134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.042170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.042377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.042458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.042734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.042797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.043045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.043081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.043242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.043290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.043490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.043557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.043794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.043859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.044070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.044107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.044278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.044313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.044569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.044636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.044897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.044962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.045213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.045249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.045456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.045493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.045718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.045782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.046027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.046091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.046377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.046413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.046619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.046693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.046959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.047022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.047276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.047341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.047615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.047652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.047833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.047903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.048145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.048210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.048479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.048546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.048775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.048810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.048995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.049060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.049319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.049383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.049685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.049751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.050012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.050048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.050279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.050342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.050618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.050684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.050935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.051000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.425 [2024-07-24 20:24:24.051276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.425 [2024-07-24 20:24:24.051312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.425 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.051553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.051619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.051911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.051975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.052288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.052353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.052652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.052688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.052931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.052994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.053239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.053304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.053597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.053663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.053907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.053942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.054138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.054204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.054476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.054542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.054789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.054853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.055131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.055167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.055370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.055450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.055674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.055739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.056009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.056076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.056356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.056397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.056628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.056700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.056973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.057037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.057290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.057354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.057672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.057708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.057881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.057945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.058205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.058270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.058276] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:30:20.426 [2024-07-24 20:24:24.058397] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:20.426 [2024-07-24 20:24:24.058547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.058613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.058854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.058888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.059065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.059099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.059302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.059367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.059634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.059700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
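For context on the interleaved records above: the nvmf target process is starting SPDK v24.09-pre on DPDK 24.03.0, and the EAL parameter "-c 0xF0" is a CPU core mask. As an illustrative aside (this snippet is not part of the log or the test scripts), a core mask selects cores by bit position, so 0xF0 pins the target to cores 4-7:

    # Hedged sketch: decode a DPDK EAL core mask such as -c 0xF0.
    mask = 0xF0
    cores = [bit for bit in range(mask.bit_length()) if mask >> bit & 1]
    print(cores)  # -> [4, 5, 6, 7], matching the "-c 0xF0" EAL parameter above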
00:30:20.426 [2024-07-24 20:24:24.059919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.059960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.060143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.060179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.060406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.060486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.060763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.060828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.061108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.061145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.061303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.061340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.426 qpair failed and we were unable to recover it.
00:30:20.426 [2024-07-24 20:24:24.061490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.426 [2024-07-24 20:24:24.061557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.061826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.061892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.062153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.062188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.062337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.062373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.062578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.062644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.064297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.064371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.064695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.064732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.064958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.064995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.065174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.065240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.065488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.065556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.065812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.065850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.066027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.066063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.066242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.066320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.066549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.066616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.066845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.066883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.067068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.067104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.067320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.067408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.067685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.067752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.068008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.068045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.068281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.068346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.068572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.068621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.068859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.068910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.069123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.069167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.069518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.069611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.069946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.070035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.070312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.070406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.070733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.070771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.070960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.071026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.071239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.071304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.071538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.071604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.071854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.071903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.072148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.072197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.072503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.072596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.072896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.072984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.073303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.073360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.073596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.073634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.073838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.073874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.074141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.427 [2024-07-24 20:24:24.074206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.427 qpair failed and we were unable to recover it.
00:30:20.427 [2024-07-24 20:24:24.074489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.074539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.074763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.074812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.075041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.075128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.075453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.075543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.075885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.075934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.076242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.076312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.076558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.076626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.076875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.076940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.077193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.077229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.077467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.077559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.077887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.077976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.078343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.078456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.078802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.078867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.079193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.079283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.079660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.079733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.080026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.080092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.080400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.080448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.080635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.080672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.080850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.080887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.081058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.081124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.081407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.081469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.081718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.081809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.082196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.082288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.082675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.082765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.083056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.083107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.083342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.083379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.083566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.083602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.083849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.083914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.084234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.084284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.084538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.084588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.084851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.084942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.085227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.085316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.085635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.085684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.086005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.086076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.086354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.086419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.086715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.086781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.087057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.087099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.087346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.428 [2024-07-24 20:24:24.087453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.428 qpair failed and we were unable to recover it.
00:30:20.428 [2024-07-24 20:24:24.087782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.087870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.088240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.088331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.088658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.088708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.089039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.089128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.089466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.089539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.089812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.089877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.090192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.090228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.090519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.090586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.090842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.090930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.091292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.091382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.091703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.091752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.092055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.092146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.092518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.092610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.092953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.093043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.093309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.093346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.093527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.093600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.093870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.093935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.094154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.094218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.094492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.094543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.094756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.094828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.095164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.095254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.095619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.095710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.096034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.096083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.096336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.096385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.096621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.096707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.097033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.097122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.097423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.097485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.097722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.097808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.098104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.098190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.098486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.098534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.098745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.098791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.099043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.099127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.099456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.099534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.099721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.099768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.099938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.099983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.100231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.100315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.100645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.100693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.100926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.429 [2024-07-24 20:24:24.100973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.429 qpair failed and we were unable to recover it.
00:30:20.429 [2024-07-24 20:24:24.101180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.101226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.101520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.101568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.101786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.101832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.102033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.102129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.102402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.102460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.102664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.102710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.102933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.102982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.103219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.103303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.103588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.103635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.103896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.103942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.104201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.104284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.104628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.104675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.104955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.105001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.105235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.105288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.105530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.105586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.105849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.105932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.106226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.106271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.106615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.106661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.106950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.107033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.107303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.430 [2024-07-24 20:24:24.107383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420
00:30:20.430 qpair failed and we were unable to recover it.
00:30:20.430 [2024-07-24 20:24:24.107679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.107725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.108032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.108116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.108472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.108520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.108790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.108874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.109172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.109218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.109518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.109564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.109767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.109813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.110044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.110125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.430 [2024-07-24 20:24:24.110445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.110494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.110772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.110818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 
00:30:20.430 [2024-07-24 20:24:24.111055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.111137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.111464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.111538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.430 qpair failed and we were unable to recover it. 00:30:20.430 [2024-07-24 20:24:24.111794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.430 [2024-07-24 20:24:24.111840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.112065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.112109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.112380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.112478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.112749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.112795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.113018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.113065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.113296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.113379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.113699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.113745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.113949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.113995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 
00:30:20.431 [2024-07-24 20:24:24.114198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.114243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x578ea0 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.114598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.114663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.114875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.114922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.115099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.115136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.115346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.115383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.115592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.115628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.115781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.115825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.116069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.116134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.116408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.116463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 00:30:20.431 [2024-07-24 20:24:24.116684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.431 [2024-07-24 20:24:24.116720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.431 qpair failed and we were unable to recover it. 
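Note: on Linux, errno 111 is ECONNREFUSED, i.e. the target host answered but nothing was listening on 10.0.0.2:4420 (4420 is the standard NVMe/TCP port). The following is a minimal standalone C sketch of the condition posix_sock_create is logging above; it is an illustration only, not SPDK code, with the address and port copied from the log. On a host where 10.0.0.2 is reachable but the port is closed, it prints errno 111; an unreachable host would instead time out.

/* sketch: reproduce connect() -> ECONNREFUSED against a closed port */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* one connection attempt to the address/port from the log above */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* reachable host, closed port -> errno = 111 (ECONNREFUSED) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}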
00:30:20.431 [... the failure sequence repeats for tqpair=0x7fe954000b90 from 2024-07-24 20:24:24.114875 through 20:24:24.121403 ...]
00:30:20.431 [2024-07-24 20:24:24.121635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.431 [2024-07-24 20:24:24.121689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.431 qpair failed and we were unable to recover it.
00:30:20.432 [... three further identical failures on tqpair=0x7fe95c000b90 (2024-07-24 20:24:24.121914 through 20:24:24.122572) ...]
00:30:20.432 [2024-07-24 20:24:24.122812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.432 [2024-07-24 20:24:24.122850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:20.432 qpair failed and we were unable to recover it.
00:30:20.432 [... the sequence resumes against tqpair=0x7fe954000b90 from 2024-07-24 20:24:24.123055 through 20:24:24.124757 ...]
00:30:20.432 [... connect() failed (errno = 111) / sock connection error / qpair failed entries continue for tqpair=0x7fe954000b90, addr=10.0.0.2, port=4420 from 2024-07-24 20:24:24.124909 through 20:24:24.163148; every attempt ends "qpair failed and we were unable to recover it." ...]
00:30:20.435 [2024-07-24 20:24:24.163392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.435 [2024-07-24 20:24:24.163493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.435 qpair failed and we were unable to recover it. 00:30:20.435 [2024-07-24 20:24:24.163712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.435 [2024-07-24 20:24:24.163775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.435 qpair failed and we were unable to recover it. 00:30:20.435 [2024-07-24 20:24:24.164022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.435 [2024-07-24 20:24:24.164056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.435 qpair failed and we were unable to recover it. 00:30:20.435 [2024-07-24 20:24:24.164260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.435 [2024-07-24 20:24:24.164324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.435 qpair failed and we were unable to recover it. 00:30:20.435 [2024-07-24 20:24:24.164555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.435 [2024-07-24 20:24:24.164621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.435 qpair failed and we were unable to recover it. 00:30:20.435 [2024-07-24 20:24:24.164937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.435 [2024-07-24 20:24:24.165002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.165278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.165313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.165510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.165576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.165830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.165894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.166191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.166254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 
00:30:20.436 [2024-07-24 20:24:24.166509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.166545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.166711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.166779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.167015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.167079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.167409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.167485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.167708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.167742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.167944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.168007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.168265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.168328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.168554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.168591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.168799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.168853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.169106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.169163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 
00:30:20.436 [2024-07-24 20:24:24.169389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.169455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.169625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.169660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.169832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.169866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.170066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.170099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.170273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.170306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.170515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.170551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.170720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.170774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.171063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.171096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.171328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.171363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.171538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.171572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 
00:30:20.436 [2024-07-24 20:24:24.171785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.171841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.172032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.172093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.172247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.172281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.172493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.172527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.172577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:20.436 [2024-07-24 20:24:24.172685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.172718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.172912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.172944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.173167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.173201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.173396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.173436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.173613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.173671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 
00:30:20.436 [2024-07-24 20:24:24.173859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.173894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.436 [2024-07-24 20:24:24.174161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.436 [2024-07-24 20:24:24.174232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.436 qpair failed and we were unable to recover it. 00:30:20.437 [2024-07-24 20:24:24.174416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.437 [2024-07-24 20:24:24.174455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.437 qpair failed and we were unable to recover it. 00:30:20.437 [2024-07-24 20:24:24.174627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.437 [2024-07-24 20:24:24.174660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.437 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.174884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.174918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.175121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.175154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.175368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.175401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.175570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.175617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.175846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.175893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.176100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.176136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 
00:30:20.715 [2024-07-24 20:24:24.176405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.176449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.176607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.176641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.176842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.176875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.177094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.177127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.177366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.177399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.177574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.177608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.177838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.177871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.178122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.178155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.178377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.178410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.178574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.178607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 
00:30:20.715 [2024-07-24 20:24:24.178784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.178817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.179000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.179034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.179182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.179215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.179380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.179412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.179611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.179644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.715 qpair failed and we were unable to recover it. 00:30:20.715 [2024-07-24 20:24:24.179779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.715 [2024-07-24 20:24:24.179812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.179984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.180027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.180203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.180243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.180474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.180508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.180655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.180704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 
00:30:20.716 [2024-07-24 20:24:24.180890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.180943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.181126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.181179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.181359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.181400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.181557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.181610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.181870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.181924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.182174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.182227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.182402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.182444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.182627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.182686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.182870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.182923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.183117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.183169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 
00:30:20.716 [2024-07-24 20:24:24.183329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.183363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.183539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.183593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.183785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.183819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.184004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.184056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.184260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.184294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.184496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.184555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.184793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.184827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.185043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.185077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.185248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.185285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.185454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.185489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 
00:30:20.716 [2024-07-24 20:24:24.185685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.185739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.185991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.186044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.186224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.186259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.186486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.186545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.186705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.186760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.186966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.187020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.187283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.187337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.187501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.187558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.187707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.187764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.187958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.188012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 
00:30:20.716 [2024-07-24 20:24:24.188205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.188240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.188416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.188461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.188645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.716 [2024-07-24 20:24:24.188700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.716 qpair failed and we were unable to recover it. 00:30:20.716 [2024-07-24 20:24:24.188861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.188915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.189164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.189217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.189372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.189407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.189574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.189610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.189803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.189856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.190046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.190099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.190281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.190316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 
00:30:20.717 [2024-07-24 20:24:24.190543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.190596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.190777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.190829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.191017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.191075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.191257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.191291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.191468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.191504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.191671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.191726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.191910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.191965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.192148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.192183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.192346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.192380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.192553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.192608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 
00:30:20.717 [2024-07-24 20:24:24.192744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.192797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.192983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.193037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.193231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.193266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.193463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.193517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.193723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.193775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.193945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.194002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.194189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.194223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.194412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.194456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.194637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.194695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.194916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.194969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 
00:30:20.717 [2024-07-24 20:24:24.195161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.195215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.196251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.196294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.196512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.196571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.197692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.197734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.197971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.198025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.198900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.198940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.199133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.199169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.200022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.200062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.200236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.200272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.717 [2024-07-24 20:24:24.201126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.201166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 
00:30:20.717 [2024-07-24 20:24:24.201350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.717 [2024-07-24 20:24:24.201386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.717 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.202515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.202555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.202722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.202778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.203686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.203726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.203912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.203966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.204827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.204867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.205042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.205077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.205977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.206017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.206232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.206268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.207365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.207407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 
00:30:20.718 [2024-07-24 20:24:24.207665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.207702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.207860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.207896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.208082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.208123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.208300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.208344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.208546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.208602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.209574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.209614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.209843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.209898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.210089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.210142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.210299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.210334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.210526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.210588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 
00:30:20.718 [2024-07-24 20:24:24.210802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.210855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.211014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.211069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.211244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.211279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.211495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.211553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.211745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.211797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.211995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.212048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.212245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.212279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.212438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.212473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.212646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.212699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.212894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.212949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 
00:30:20.718 [2024-07-24 20:24:24.213132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.213167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.213321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.213355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.213559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.213614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.213801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.213854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.214712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.214752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.214972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.215026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.216122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.216163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.216347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.718 [2024-07-24 20:24:24.216383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.718 qpair failed and we were unable to recover it. 00:30:20.718 [2024-07-24 20:24:24.217295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.217336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.217559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.217614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 
00:30:20.719 [2024-07-24 20:24:24.217837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.217889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.218093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.218147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.218352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.218387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.218604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.218660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.218844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.218897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.219083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.219137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.219341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.219375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.219564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.219619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.219772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.219826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.220041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.220093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 
00:30:20.719 [2024-07-24 20:24:24.220297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.220331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.220500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.220557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.220754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.220812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.221756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.221795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.222001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.222055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.222246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.222284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.222483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.222541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.222759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.222813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.223000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.223055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.223223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.223256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 
00:30:20.719 [2024-07-24 20:24:24.223447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.223481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.223695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.223761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.223974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.224007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.224208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.224242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.224421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.224462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.224647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.224705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.224936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.224970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.225138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.225171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.225384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.225417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.225619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.225677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 
00:30:20.719 [2024-07-24 20:24:24.225899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.225954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.226174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.226228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.719 [2024-07-24 20:24:24.226410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.719 [2024-07-24 20:24:24.226488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.719 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.226652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.226709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.226922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.226975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.227158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.227214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.227419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.227463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.227649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.227707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.227875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.227929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.228078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.228137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 
00:30:20.720 [2024-07-24 20:24:24.228327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.228362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.228548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.228583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.228762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.228816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.229029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.229087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.229299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.229333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.229503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.229558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.229769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.229827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.230011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.230068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.230245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.230279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.230422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.230471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 
00:30:20.720 [2024-07-24 20:24:24.230678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.230739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.230927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.230981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.231203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.231265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.231447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.231501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.231699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.231761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.231951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.232005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.232192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.232227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.232406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.232450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.232635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.232702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.232901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.232956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 
00:30:20.720 [2024-07-24 20:24:24.233160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.233216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.233401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.233444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.233623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.233678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.233880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.233941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.720 [2024-07-24 20:24:24.234145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.720 [2024-07-24 20:24:24.234201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.720 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.234373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.234407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.234619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.234675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.234842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.234898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.235086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.235148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.235306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.235341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 
00:30:20.721 [2024-07-24 20:24:24.235492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.235549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.235741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.235800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.235987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.236039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.236185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.236219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.236398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.236438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.236636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.236695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.236879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.236935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.237159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.237216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.237401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.237442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.237621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.237676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 
00:30:20.721 [2024-07-24 20:24:24.237880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.237935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.238153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.238210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.238391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.238426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.238589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.238642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.238870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.238925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.239093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.239156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.239320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.239354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.239532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.239567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.239723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.239778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.239997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.240072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 
00:30:20.721 [2024-07-24 20:24:24.240328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.240363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.240570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.240624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.240819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.240881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.241073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.241129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.241379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.241413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.241608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.241665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.241920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.241974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.242188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.242243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.242490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.242556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.242695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.242755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 
00:30:20.721 [2024-07-24 20:24:24.242978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.243031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.243233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.243290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.721 qpair failed and we were unable to recover it. 00:30:20.721 [2024-07-24 20:24:24.243463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.721 [2024-07-24 20:24:24.243525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.243695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.243753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.243978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.244035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.244219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.244253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.244485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.244521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.244733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.244787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.244940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.244995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.245171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.245205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 
00:30:20.722 [2024-07-24 20:24:24.245387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.245422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.245622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.245678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.245942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.246007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.246253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.246319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.246550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.246607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.246788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.246845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.247120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.247177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.247380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.247415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.247648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.247717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.247870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.247929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 
00:30:20.722 [2024-07-24 20:24:24.248162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.248227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.248467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.248519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.248735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.248801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.249062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.249132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.249366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.249415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.249673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.249743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.249971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.250037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.250302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.250344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.250546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.250603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 00:30:20.722 [2024-07-24 20:24:24.250810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.722 [2024-07-24 20:24:24.250877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.722 qpair failed and we were unable to recover it. 
00:30:20.727 [2024-07-24 20:24:24.300752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.727 [2024-07-24 20:24:24.300862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:20.727 qpair failed and we were unable to recover it.
00:30:20.727 [the same connect()/qpair error pair repeats for tqpair=0x7fe954000b90 through 2024-07-24 20:24:24.310217]
00:30:20.728 [2024-07-24 20:24:24.310523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.310560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.310714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.310756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.310942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.310985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.311199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.311235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.311413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.311461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.311634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.311670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.311899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.311941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.312274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.312341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.312598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.312639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.312868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.312903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 
00:30:20.728 [2024-07-24 20:24:24.313175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.313242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.313518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.313555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.313735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.313800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.314114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.314178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.314403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.314451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.314608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.314643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.314818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.314854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.315045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.315081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.315340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.315406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.315629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.315665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 
00:30:20.728 [2024-07-24 20:24:24.315877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.315943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.316321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.316387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.316706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.316743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.316923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.316958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.317167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.317202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.317482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.317520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.317699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.317742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.318013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.318077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.318338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.318408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.728 [2024-07-24 20:24:24.318695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.318736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 
00:30:20.728 [2024-07-24 20:24:24.318989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.728 [2024-07-24 20:24:24.319026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.728 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.319234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.319271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.319447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.319483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.319661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.319702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.319907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.319942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.320242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.320311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.320572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.320609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.320818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.320855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.321038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.321074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.321274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.321311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 
00:30:20.729 [2024-07-24 20:24:24.321515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.321552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.321709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.321784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.322125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.322200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.322534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.322572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.322739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.322782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.322999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.323034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.323263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.323340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.323595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.323633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.323832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.323898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.324163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.324229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 
00:30:20.729 [2024-07-24 20:24:24.324503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.324546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.324761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.324796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.325012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.325048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.325220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.325257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.325506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.325543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.325769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.325805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.326108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.326173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.326466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.326535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.326824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.326859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.327049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.327087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 
00:30:20.729 [2024-07-24 20:24:24.327271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.327307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.327526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.327563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.729 [2024-07-24 20:24:24.327770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.729 [2024-07-24 20:24:24.327807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.729 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.328103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.328177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.328446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.328518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.328723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.328759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.328970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.329005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.329214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.329277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.329573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.329640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.329908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.329983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 
00:30:20.730 [2024-07-24 20:24:24.330315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.330351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.330630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.330666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.330953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.331017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.331319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.331383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.331642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.331677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.331909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.331973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.332292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.332356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.332618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.332682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.332995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.333029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.333288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.333352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 
00:30:20.730 [2024-07-24 20:24:24.333613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.333678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.333916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.333980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.334242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.334276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.334522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.334588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.334825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.334890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.335165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.335227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.335500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.335536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.335773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.335837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.336140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.336203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.336491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.336557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 
00:30:20.730 [2024-07-24 20:24:24.336865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.336900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.337188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.337251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.337521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.337587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.337891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.337955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.338265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.338300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.338554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.338619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.338900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.338963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.339223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.339286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.339597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.339633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 00:30:20.730 [2024-07-24 20:24:24.339916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.339981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.730 qpair failed and we were unable to recover it. 
00:30:20.730 [2024-07-24 20:24:24.340261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.730 [2024-07-24 20:24:24.340324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.340624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.340689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.340966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.341001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.341279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.341342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.341623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.341688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.341994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.342058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.342336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.342370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.342572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.342609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.342822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.342885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.343146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.343220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 
00:30:20.731 [2024-07-24 20:24:24.343496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.343532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.343779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.343843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.344154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.344218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.344487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.344553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.344857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.344892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.345167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.345230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.345543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.345609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.345881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.345944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.346232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.346265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.346465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.346499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 
00:30:20.731 [2024-07-24 20:24:24.346698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.346731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.346934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.346967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.347168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.347201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.347439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.347504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.347709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.347742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.347954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.347986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.348160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.348193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.348386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.348419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.348668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.348702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 00:30:20.731 [2024-07-24 20:24:24.348919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.731 [2024-07-24 20:24:24.348952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:20.731 qpair failed and we were unable to recover it. 
00:30:20.731 [... connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets for tqpair=0x7fe954000b90 continue from 20:24:24.349169 through 20:24:24.350393; duplicate entries collapsed ...]
00:30:20.731 [2024-07-24 20:24:24.350392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:20.731 [2024-07-24 20:24:24.350444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:20.731 [2024-07-24 20:24:24.350466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:20.731 [2024-07-24 20:24:24.350481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:20.731 [2024-07-24 20:24:24.350495] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:20.731 [2024-07-24 20:24:24.350613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:20.732 [2024-07-24 20:24:24.350610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.732 [2024-07-24 20:24:24.350664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420
00:30:20.732 qpair failed and we were unable to recover it.
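The app_setup_trace notices above spell out the runtime-tracing workflow for this app instance. A minimal sketch of that workflow, using only the names the log itself prints (the 'spdk_trace -s nvmf -i 0' invocation and the /dev/shm/nvmf_trace.0 path come verbatim from the notices; that spdk_trace is on PATH, and the /tmp destination, are illustrative assumptions):

    # Snapshot the tracepoints of the running nvmf app (per the app.c:604 notice):
    spdk_trace -s nvmf -i 0
    # Or keep the raw trace shared-memory file for offline analysis (app.c:611):
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0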
00:30:20.732 [2024-07-24 20:24:24.350708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:20.732 [2024-07-24 20:24:24.350783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:20.732 [2024-07-24 20:24:24.350787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:20.732 [... connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets for tqpair=0x7fe94c000b90 repeat from 20:24:24.350877 through 20:24:24.357585, all against addr=10.0.0.2, port=4420; duplicate entries collapsed ...]
00:30:20.732 [2024-07-24 20:24:24.357786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.732 [2024-07-24 20:24:24.357820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.732 qpair failed and we were unable to recover it. 00:30:20.732 [2024-07-24 20:24:24.357977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.732 [2024-07-24 20:24:24.358011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.732 qpair failed and we were unable to recover it. 00:30:20.732 [2024-07-24 20:24:24.358212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.732 [2024-07-24 20:24:24.358245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.732 qpair failed and we were unable to recover it. 00:30:20.732 [2024-07-24 20:24:24.358503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.732 [2024-07-24 20:24:24.358537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.732 qpair failed and we were unable to recover it. 00:30:20.732 [2024-07-24 20:24:24.358785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.732 [2024-07-24 20:24:24.358818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.732 qpair failed and we were unable to recover it. 00:30:20.732 [2024-07-24 20:24:24.358989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.732 [2024-07-24 20:24:24.359025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.732 qpair failed and we were unable to recover it. 00:30:20.732 [2024-07-24 20:24:24.359226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.359259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.359499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.359533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.359759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.359792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.359968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.360001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 
00:30:20.733 [2024-07-24 20:24:24.360130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.360163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.360361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.360395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.360578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.360612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.360785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.360817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.361015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.361048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.361185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.361219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.361404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.361445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.361699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.361733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.361899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.361932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.362139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.362173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 
00:30:20.733 [2024-07-24 20:24:24.362386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.362420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.362664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.362698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.362904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.362938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.363138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.363171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.363346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.363380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.363590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.363623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.363825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.363858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.364075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.364109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.364313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.364346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.364614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.364651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 
00:30:20.733 [2024-07-24 20:24:24.364859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.364892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.365096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.365130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.365350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.365384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.365546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.365586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.365809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.365842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.365972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.366005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.366172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.366205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.366334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.366367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.366528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.366563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.733 qpair failed and we were unable to recover it. 00:30:20.733 [2024-07-24 20:24:24.366698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.733 [2024-07-24 20:24:24.366731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 
00:30:20.734 [2024-07-24 20:24:24.366864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.366897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.367094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.367127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.367329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.367362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.367561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.367595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.367777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.367811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.367966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.368000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.368208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.368241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.368501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.368535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.368791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.368824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.369032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.369066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 
00:30:20.734 [2024-07-24 20:24:24.369234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.369267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.369481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.369516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.369672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.369706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.369913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.369946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.370144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.370177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.370342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.370375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.370558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.370592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.370770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.370804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.370981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.371014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.371214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.371247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 
00:30:20.734 [2024-07-24 20:24:24.371460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.371494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.371695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.371728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.371952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.371986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.372161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.372194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.372355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.372388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.372566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.372600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.372771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.372805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.373011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.373044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.373300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.373333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.373492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.373526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 
00:30:20.734 [2024-07-24 20:24:24.373724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.373758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.373972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.374005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.374220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.374253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.374414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.374468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.374650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.374683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.734 [2024-07-24 20:24:24.374857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.734 [2024-07-24 20:24:24.374890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.734 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.375087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.375120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.375323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.375356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.375506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.375540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.375740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.375773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 
00:30:20.735 [2024-07-24 20:24:24.376033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.376066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.376271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.376304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.376517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.376551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.376755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.376788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.376959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.376992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.377176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.377209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.377377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.377410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.377641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.377676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.377873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.377906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.378079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.378112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 
00:30:20.735 [2024-07-24 20:24:24.378296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.378330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.378533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.378567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.378780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.378813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.379024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.379058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.379257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.379290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.379499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.379533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.379731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.379765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.379963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.379997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.380173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.380206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.380410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.380451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 
00:30:20.735 [2024-07-24 20:24:24.380654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.380712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.380934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.380969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.381195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.381228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.381418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.381461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.381640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.381673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.381876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.381909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.382085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.382118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.382318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.382350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.382524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.382557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.382730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.382763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 
00:30:20.735 [2024-07-24 20:24:24.382959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.382992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.383163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.383196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.383368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.383401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.735 [2024-07-24 20:24:24.383618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.735 [2024-07-24 20:24:24.383658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.735 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.383844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.383877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.384062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.384095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.384283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.384316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.384477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.384511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.384714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.384747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.384996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.385028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 
00:30:20.736 [2024-07-24 20:24:24.385228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.385260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.385434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.385467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.385670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.385703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.385887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.385920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.386100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.386132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.386326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.386359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.386524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.386557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.386689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.386722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.386905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.386937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.387118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.387151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 
00:30:20.736 [2024-07-24 20:24:24.387360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.387392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.387576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.387609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.387895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.387928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.388219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.388252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.388494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.388528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.388757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.388790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.389000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.389033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.389208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.389240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.389426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.389466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.389727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.389760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 
00:30:20.736 [2024-07-24 20:24:24.389902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.389935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.390134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.390167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.390343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.390376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.390581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.390614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.390764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.390797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.390934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.390966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.391132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.391165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.391368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.391401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.391611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.391644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.391824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.391857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 
00:30:20.736 [2024-07-24 20:24:24.391999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.736 [2024-07-24 20:24:24.392032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.736 qpair failed and we were unable to recover it. 00:30:20.736 [2024-07-24 20:24:24.392187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.392220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.392392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.392424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.392602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.392635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.392851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.392884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.393111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.393143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.393324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.393357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.393538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.393572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.393771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.393804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.394034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.394066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 
00:30:20.737 [2024-07-24 20:24:24.394293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.394325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.394544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.394577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.394785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.394818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.395024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.395057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.395235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.395268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.395476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.395510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.395758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.395791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.396043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.396076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.396212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.396245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.396412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.396464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 
00:30:20.737 [2024-07-24 20:24:24.396627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.396661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.396826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.396859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.397065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.397098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.397247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.397280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.397483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.397517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.397766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.397799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.398021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.398054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.398255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.398288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.398465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.398498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.398637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.398669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 
00:30:20.737 [2024-07-24 20:24:24.398867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.398905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.399157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.399190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.399391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.399423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.399632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.399665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.399883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.399916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.400057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.400090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.400290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.400323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.400558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.400592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.737 [2024-07-24 20:24:24.400796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.737 [2024-07-24 20:24:24.400828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.737 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.401056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.401089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 
00:30:20.738 [2024-07-24 20:24:24.401268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.401301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.401511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.401545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.401746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.401779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.401976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.402009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.402212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.402245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.402457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.402491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.402694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.402727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.402924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.402957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.403129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.403161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.403323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.403356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 
00:30:20.738 [2024-07-24 20:24:24.403555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.403588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.403789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.403822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.404030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.404063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.404282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.404314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.404487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.404520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.404727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.404760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.404965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.404998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.405174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.405208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.405420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.405461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.405705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.405738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 
00:30:20.738 [2024-07-24 20:24:24.405948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.405981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.406155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.406187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.406386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.406419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.406628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.406661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.406865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.406897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.407129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.407162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.407306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.407339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.407501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.407535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.407670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.407702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 00:30:20.738 [2024-07-24 20:24:24.407902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.738 [2024-07-24 20:24:24.407935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.738 qpair failed and we were unable to recover it. 
00:30:20.738 [2024-07-24 20:24:24.408136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.408180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.408353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.408386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.408590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.408624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.408827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.408859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.409058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.409091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.409252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.409285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.409425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.409464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.409674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.409707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.409882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.409915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.410091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.410124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 
00:30:20.739 [2024-07-24 20:24:24.410297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.410330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.410539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.410573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.410774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.410807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.410983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.411016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.411223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.411256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.411472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.411506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.411727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.411759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.411949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.411982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.412142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.412175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.412381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.412413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 
00:30:20.739 [2024-07-24 20:24:24.412545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.412577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.412749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.412784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.412957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.412989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.413190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.413222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.413442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.413475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.413642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.413675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.413846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.413878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.414094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.414126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.414383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.414416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.414586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.414619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 
00:30:20.739 [2024-07-24 20:24:24.414834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.414867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.415038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.415070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.415267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.415300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.415499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.415533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.415735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.415768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.415938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.415971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.416143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.416175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.739 [2024-07-24 20:24:24.416374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.739 [2024-07-24 20:24:24.416406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.739 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.416630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.416663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.416871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.416903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 
00:30:20.740 [2024-07-24 20:24:24.417100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.417138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.417342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.417375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.417562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.417595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.417764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.417796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.417998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.418031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.418237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.418269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.418457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.418490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.418699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.418732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.418927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.418959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.419173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.419206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 
00:30:20.740 [2024-07-24 20:24:24.419464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.419497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.419675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.419708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.419912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.419945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.420081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.420113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.420320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.420352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.420607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.420641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.420868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.420900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.421109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.421142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.421373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.421406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.421613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.421645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 
00:30:20.740 [2024-07-24 20:24:24.421807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.421839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.422036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.422068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.422329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.422361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.422544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.422577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.422771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.422804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.422993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.423025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.423226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.423258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.423445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.423479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.423677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.423709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.423888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.423921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 
00:30:20.740 [2024-07-24 20:24:24.424126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.424159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.424366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.424398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.424611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.424645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.424816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.424849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.425060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.425093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.740 qpair failed and we were unable to recover it. 00:30:20.740 [2024-07-24 20:24:24.425254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.740 [2024-07-24 20:24:24.425286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.425448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.425481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.425612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.425645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.425807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.425848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.426012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.426044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 
00:30:20.741 [2024-07-24 20:24:24.426190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.426227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.426446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.426479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.426659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.426692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.426865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.426897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.427071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.427104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.427275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.427307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.427477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.427510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.427675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.427707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.427848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.427880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.428090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.428123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 
00:30:20.741 [2024-07-24 20:24:24.428298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.428331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.428508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.428541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.428709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.428742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.428957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.428990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.429142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.429174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.429347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.429379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.429588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.429621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.429804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.429837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.429988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.430020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 00:30:20.741 [2024-07-24 20:24:24.430192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.741 [2024-07-24 20:24:24.430224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.741 qpair failed and we were unable to recover it. 
00:30:20.741 [2024-07-24 20:24:24.430404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.430455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.430632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.430665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.430796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.430829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.431025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.431058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.431258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.431299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.431442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.431475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.431657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.431690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.431899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.431932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.432115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.432148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.432345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.432377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.432559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.432592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.432750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.432783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.432983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.741 [2024-07-24 20:24:24.433016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.741 qpair failed and we were unable to recover it.
00:30:20.741 [2024-07-24 20:24:24.433214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.433247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.433407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.433448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.433645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.433679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.433896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.433928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.434113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.434145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.434345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.434378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.434583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.434617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.434817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.434855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.435060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.435092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.435272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.435304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.435463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.435497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.435670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.435702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.435896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.435929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.436120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.436153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.436348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.436381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.436633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.436667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.436874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.436909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.437107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.437140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.437317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.437350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.437561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.437595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.437794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.437827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.438094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.438126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.438326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.438358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.438494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.438527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.438791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.438824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.439023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.439055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.439263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.439296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.439445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.439479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.439680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.439713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.439875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.439907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.440083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.440116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.440290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.440322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.440492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.440525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.440726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.440759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.441018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.441051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.441254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.441287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.441474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.441508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.742 [2024-07-24 20:24:24.441648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.742 [2024-07-24 20:24:24.441681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.742 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.441864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.441897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.442107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.442139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.442341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.442374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.442592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.442626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.442834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.442867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.443016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.443049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.443246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.443279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.443487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.443521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.443693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.443726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.443893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.443931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.444103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.444137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.444332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.444365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.444573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.444607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.444780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.444814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.444988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.445021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.445230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.445263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.445420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.445460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.445635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.445668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.445845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.445877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.446046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.446078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.446250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.446283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.446473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.446507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.446688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.446721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.446918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.446950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.447163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.447195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.447442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.447475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.447649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.447682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.447854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.447886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.448054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.448087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.448259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.448291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.448499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.743 [2024-07-24 20:24:24.448532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.743 qpair failed and we were unable to recover it.
00:30:20.743 [2024-07-24 20:24:24.448700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.448733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.448929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.448961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.449160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.449192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.449393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.449426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.449633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.449665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.449840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.449873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.450034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.450066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.450230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.450263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.450463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.450497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.450700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.450732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.450935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.450967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.451133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.451166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.451362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.451394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.451653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.451688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.451849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.451893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.452055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.452088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.452303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.452335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.452461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.452494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.452659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.452701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.452918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.452951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.453149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.453181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.453391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.453423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.453605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.453638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.453837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.453869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.454041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.454074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.454239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.454272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.454471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.454503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.454660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.454693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.454864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.454897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.455096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.455129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.455328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.455361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.455534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.455568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.455773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.455805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.456004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.456036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.456222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.456255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.456450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.456484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.456668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.456701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.744 [2024-07-24 20:24:24.456909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.744 [2024-07-24 20:24:24.456942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.744 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.457117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.457149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.457342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.457374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.457559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.457592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.457795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.457827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.458040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.458072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.458203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.458235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.458440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.458473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.458647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.458680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.458879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.458911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.459079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.459112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.459275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.459307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.459505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.459539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.459745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.459777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.459972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.460005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.460173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.460206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.460465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.460498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.460705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.460737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.460914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.460947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.461144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.461177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.461370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.461403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.461609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.461647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.461782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.461814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.461984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.462017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.462192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.462224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.462396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.462446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.462618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.462650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.462855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.462888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.463050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.463083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.463279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.463311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.463510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.463543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.463687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.463720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.463976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.464008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.464206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.464239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.464444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.464478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.464614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.464646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.464814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.464847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.465054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.465086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.745 qpair failed and we were unable to recover it.
00:30:20.745 [2024-07-24 20:24:24.465263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.745 [2024-07-24 20:24:24.465296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.465476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.465509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.465695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.465727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.465899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.465932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.466110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.466143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.466321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.466353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.466521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.466555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.466727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.466760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.466922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.466955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.467088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.467120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.467305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.467338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.467482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.467515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.467736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.746 [2024-07-24 20:24:24.467769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:20.746 qpair failed and we were unable to recover it.
00:30:20.746 [2024-07-24 20:24:24.467977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.468009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.468171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.468203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.468409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.468449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.468707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.468739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.468940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.468972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.469145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.469178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.469371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.469403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.469615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.469648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.469825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.469858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.470011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.470043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 
00:30:20.746 [2024-07-24 20:24:24.470252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.470289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.470475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.470508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.470705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.470738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.470935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.470967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.471220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.471252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.471450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.471484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.471652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.471684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.471826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.471858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.472063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.472096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.472300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.472333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 
00:30:20.746 [2024-07-24 20:24:24.472505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.472538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.472740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.472773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.472937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.472969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.473138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.473170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.473374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.746 [2024-07-24 20:24:24.473407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.746 qpair failed and we were unable to recover it. 00:30:20.746 [2024-07-24 20:24:24.473670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.473703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.473903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.473935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.474111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.474144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.474322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.474355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.474533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.474567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 
00:30:20.747 [2024-07-24 20:24:24.474737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.474770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.474967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.475000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.475201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.475233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.475405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.475444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.475608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.475640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.475825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.475858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.476044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.476076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.476260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.476292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.476493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.476526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.476674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.476707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 
00:30:20.747 [2024-07-24 20:24:24.476881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.476913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.477124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.477157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.477331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.477364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.477535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.477568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.477735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.477768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.477971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.478003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.478200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.478232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.478375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.478408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.478624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.478657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.478851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.478884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 
00:30:20.747 [2024-07-24 20:24:24.479083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.479121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.479295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.479328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.479471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.479505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.479670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.479703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.479847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.479879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.480139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.480172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.480370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.747 [2024-07-24 20:24:24.480402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.747 qpair failed and we were unable to recover it. 00:30:20.747 [2024-07-24 20:24:24.480621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.748 [2024-07-24 20:24:24.480655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:20.748 qpair failed and we were unable to recover it. 00:30:20.748 [2024-07-24 20:24:24.480862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.748 [2024-07-24 20:24:24.480894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.481043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.481076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 
00:30:21.027 [2024-07-24 20:24:24.481288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.481321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.481524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.481557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.481767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.481800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.481951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.481984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.482199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.482232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.482401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.482439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.482649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.482681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.482850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.482883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.483093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.483126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.483323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.483355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 
00:30:21.027 [2024-07-24 20:24:24.483531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.483565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.483768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.483801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.484001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.484033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.484203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.484235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.484444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.484478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.484686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.484718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.484889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.484922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.485123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.485156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.485359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.485392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 00:30:21.027 [2024-07-24 20:24:24.485583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.027 [2024-07-24 20:24:24.485617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.027 qpair failed and we were unable to recover it. 
00:30:21.027 [2024-07-24 20:24:24.485823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.485856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.486033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.486066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.486240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.486273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.486459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.486493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.486752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.486785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.486998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.487031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.487216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.487249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.487418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.487459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.487659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.487692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.487899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.487932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 
00:30:21.028 [2024-07-24 20:24:24.488101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.488139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.488343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.488376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.488584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.488617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.488802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.488835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.489021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.489054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.489229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.489261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.489424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.489465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.489664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.489697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.489896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.489928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.490102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.490135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 
00:30:21.028 [2024-07-24 20:24:24.490280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.490313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.490507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.490541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.490741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.490774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.490924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.490957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.491160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.491193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.491387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.491420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.491642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.491674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.491849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.491882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.492078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.492111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.492278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.492311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 
00:30:21.028 [2024-07-24 20:24:24.492513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.492546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.492737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.492769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.492934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.492967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.493135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.493168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.493302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.028 [2024-07-24 20:24:24.493334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.028 qpair failed and we were unable to recover it. 00:30:21.028 [2024-07-24 20:24:24.493533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.493566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.493747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.493779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.493935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.493968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.494150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.494183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.494357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.494390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 
00:30:21.029 [2024-07-24 20:24:24.494603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.494636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.494835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.494868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.495071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.495104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.495312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.495345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.495553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.495587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.495755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.495787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.495969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.496002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.496156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.496189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.496392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.496425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.496630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.496663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 
00:30:21.029 [2024-07-24 20:24:24.496859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.496899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.497101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.497134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.497276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.497308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.497510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.497543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.497747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.497780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.497989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.498022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.498233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.498266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.498444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.498477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.498658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.498691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.498867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.498913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 
00:30:21.029 [2024-07-24 20:24:24.499133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.499167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.499337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.499371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.499559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.499606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.499814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.499860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.500040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.500086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.500323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.500370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.500586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.500632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.500903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.500940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.501184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.501219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 00:30:21.029 [2024-07-24 20:24:24.501437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.029 [2024-07-24 20:24:24.501484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420 00:30:21.029 qpair failed and we were unable to recover it. 
00:30:21.029 [2024-07-24 20:24:24.501719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.029 [2024-07-24 20:24:24.501765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe95c000b90 with addr=10.0.0.2, port=4420
00:30:21.029 qpair failed and we were unable to recover it.
[error triplet above repeated 209 more times between 2024-07-24 20:24:24.501 and 20:24:24.551: 58 occurrences reported tqpair=0x7fe95c000b90 and the remaining 152 reported tqpair=0x7fe954000b90, all with addr=10.0.0.2, port=4420 and errno = 111, each ending "qpair failed and we were unable to recover it."]
00:30:21.036 [2024-07-24 20:24:24.551672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.551712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.551915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.551948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.552082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.552116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.552289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.552322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.552479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.552515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.552691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.552725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.552929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.552963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.553172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.553211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.553394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.553436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.553617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.553651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 
00:30:21.036 [2024-07-24 20:24:24.553833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.553867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.554063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.554098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.554299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.554339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.554522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.554556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.554730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.554764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.554973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.555007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.555225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.555259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.555471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.555506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.555726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.555760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.555932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.555967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 
00:30:21.036 [2024-07-24 20:24:24.556145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.556185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.556401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.556447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.556638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.556673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.556878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.556912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.557102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.557137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.557311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.557343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.557487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.557523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.557695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.557729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.557881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.557915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.558056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.558090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 
00:30:21.036 [2024-07-24 20:24:24.558289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.558324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.036 [2024-07-24 20:24:24.558501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.036 [2024-07-24 20:24:24.558543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.036 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.558694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.558729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.558934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.558975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.559148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.559182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.559400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.559446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.559665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.559698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.559885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.559924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.560088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.560127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.560308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.560342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 
00:30:21.037 [2024-07-24 20:24:24.560517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.560557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.560732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.560765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.560937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.560977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.561179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.561212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.561404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.561449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.561667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.561708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.561862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.561895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.562063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.562097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.562270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.562305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.562460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.562495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 
00:30:21.037 [2024-07-24 20:24:24.562635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.562669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.562880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.562914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.563067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.563100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.563241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.563277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.563490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.563525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.563756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.563790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.563926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.563962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.564179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.564213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.564388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.564422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.564639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.564675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 
00:30:21.037 [2024-07-24 20:24:24.564854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.037 [2024-07-24 20:24:24.564888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.037 qpair failed and we were unable to recover it. 00:30:21.037 [2024-07-24 20:24:24.565089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.565129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.565344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.565377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.565529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.565570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.565765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.565799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.565975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.566009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.566141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.566174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.566375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.566409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.566589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.566623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.566849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.566883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 
00:30:21.038 [2024-07-24 20:24:24.567059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.567097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.567309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.567342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.567530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.567566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.567743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.567783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.567975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.568011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.568178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.568212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.568352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.568385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.568574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.568621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.568837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.568870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.569052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.569086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 
00:30:21.038 [2024-07-24 20:24:24.569225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.569258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.569445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.569480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.569647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.569680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.569874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.569908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.570118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.570156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.570361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.570394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.570614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.570649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.570820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.570861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.571068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.571102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.571304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.571338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 
00:30:21.038 [2024-07-24 20:24:24.571537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.571572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.571757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.571792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.571969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.572004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.572193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.572227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.572396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.572439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.572666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.572700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.572861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.038 [2024-07-24 20:24:24.572896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.038 qpair failed and we were unable to recover it. 00:30:21.038 [2024-07-24 20:24:24.573101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.573136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.573324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.573358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.573542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.573585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 
00:30:21.039 [2024-07-24 20:24:24.573803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.573836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.574044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.574078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.574280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.574323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.574532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.574566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.574778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.574814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.575028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.575063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.575282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.575316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.575492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.575532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.575740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.575774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.575942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.575976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 
00:30:21.039 [2024-07-24 20:24:24.576178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.576218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.576365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.576399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.576620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.576656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.576868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.576901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.577077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.577112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.577279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.577312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.577526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.577561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.577716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.577762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.577977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.578012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.578217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.578252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 
00:30:21.039 [2024-07-24 20:24:24.578465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.578504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.578727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.578761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.578904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.578944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.579099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.579132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.579284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.579321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.579519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.579558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.579769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.579804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.579989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.580024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.580211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.580246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.580434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.580477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 
00:30:21.039 [2024-07-24 20:24:24.580690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.580723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.580901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.580936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.581126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.581159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.039 [2024-07-24 20:24:24.581327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.039 [2024-07-24 20:24:24.581361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.039 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.581566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.581605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.581826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.581860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.582072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.582107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.582288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.582322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.582515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.582550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.582765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.582803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 
00:30:21.040 [2024-07-24 20:24:24.582979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.583012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.583185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.583220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.583389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.583422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.583613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.583647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.583856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.583892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.584078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.584112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.584312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.584346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.584546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.584582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.584766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.584800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.584970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.585003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 
00:30:21.040 [2024-07-24 20:24:24.585192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.585226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.585413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.585461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.585650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.585684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.585868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.585902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.586060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.586093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.586294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.586329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.586497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.586532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.586705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.586745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.586957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.586996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.587159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.587199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 
00:30:21.040 [2024-07-24 20:24:24.587382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.587416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.587594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.587627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.587828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.587861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.588074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.588107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.588277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.588313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.588470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.588504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.588677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.588710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.588908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.588940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.589148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.589181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.589359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.589392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 
00:30:21.040 [2024-07-24 20:24:24.589601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.040 [2024-07-24 20:24:24.589635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.040 qpair failed and we were unable to recover it. 00:30:21.040 [2024-07-24 20:24:24.589815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.589849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.590051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.590084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.590283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.590316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.590456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.590491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.590692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.590725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.590913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.590946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.591129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.591170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.591334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.591367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.591537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.591571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 
00:30:21.041 [2024-07-24 20:24:24.591781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.591814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.592032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.592066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.592267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.592300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.592475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.592510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.592682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.592716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.592884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.592917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.593096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.593129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.593332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.593364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.593578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.593612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.593791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.593824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 
00:30:21.041 [2024-07-24 20:24:24.594033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.594065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.594244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.594277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.594461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.594495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.594678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.594710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.594870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.594903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.595065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.595098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.595296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.595329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.595535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.595575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.595773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.595806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.595962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.595995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 
00:30:21.041 [2024-07-24 20:24:24.596122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.596155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.596363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.596396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.596611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.596645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.596777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.596810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.041 [2024-07-24 20:24:24.597011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.041 [2024-07-24 20:24:24.597044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.041 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.597213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.597246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.597420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.597465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.597676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.597708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.597895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.597928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.598130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.598163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 
00:30:21.042 [2024-07-24 20:24:24.598337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.598369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.598596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.598630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.598832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.598866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.599011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.599044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.599243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.599276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.599475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.599510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.599707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.599740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.599938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.599971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.600134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.600167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.600366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.600399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 
00:30:21.042 [2024-07-24 20:24:24.600599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.600633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.600808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.600840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.601039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.601072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.601264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.601297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.601450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.601484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.601657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.601690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.601892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.601925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.602099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.602132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.602301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.602334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.602511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.602545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 
00:30:21.042 [2024-07-24 20:24:24.602749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.042 [2024-07-24 20:24:24.602781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.042 qpair failed and we were unable to recover it. 00:30:21.042 [2024-07-24 20:24:24.602955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.602988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.603130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.603162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.603302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.603335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.603513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.603548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.603724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.603757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.603928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.603961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.604129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.604168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.604318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.604351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.604552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.604586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 
00:30:21.043 [2024-07-24 20:24:24.604787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.604821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.605028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.605061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.605259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.605292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.605465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.605498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.605718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.605750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.605951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.605984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.606191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.606223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.606451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.606486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.606652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.606685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.606866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.606899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 
00:30:21.043 [2024-07-24 20:24:24.607096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.607129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.607355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.607388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.607612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.607647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.607790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.607823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.607996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.608030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.608174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.608207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.608378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.608411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.608632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.608665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.608848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.608881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.609061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.609094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 
00:30:21.043 [2024-07-24 20:24:24.609236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.609268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.609443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.609477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.609676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.609709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.609880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.609912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.610126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.610159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.610354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.610387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.043 [2024-07-24 20:24:24.610606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.043 [2024-07-24 20:24:24.610640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.043 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.610804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.610837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.610968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.611001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.611198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.611231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 
00:30:21.044 [2024-07-24 20:24:24.611443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.611478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.611675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.611709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.611884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.611917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.612117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.612150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.612358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.612392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.612625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.612659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.612805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.612838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.613004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.613042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.613241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.613274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.613446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.613481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 
00:30:21.044 [2024-07-24 20:24:24.613646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.613679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.613825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.613857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.614057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.614090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.614306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.614339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.614522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.614556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.614758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.614791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.614976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.615008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.615210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.615243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.615453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.615486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.615663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.615696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 
00:30:21.044 [2024-07-24 20:24:24.615868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.615901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.616085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.616118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.616312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.616345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.616542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.616576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.616775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.616808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.616981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.617013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.617184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.617217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.617351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.617385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.617567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.617600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.617815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.617848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 
00:30:21.044 [2024-07-24 20:24:24.617993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.618026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.044 qpair failed and we were unable to recover it. 00:30:21.044 [2024-07-24 20:24:24.618232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.044 [2024-07-24 20:24:24.618264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 00:30:21.045 [2024-07-24 20:24:24.618445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.045 [2024-07-24 20:24:24.618478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 00:30:21.045 [2024-07-24 20:24:24.618646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.045 [2024-07-24 20:24:24.618679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 00:30:21.045 [2024-07-24 20:24:24.618883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.045 [2024-07-24 20:24:24.618917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 00:30:21.045 [2024-07-24 20:24:24.619091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.045 [2024-07-24 20:24:24.619124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 00:30:21.045 [2024-07-24 20:24:24.619263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.045 [2024-07-24 20:24:24.619295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 00:30:21.045 [2024-07-24 20:24:24.619481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.045 [2024-07-24 20:24:24.619515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 00:30:21.045 [2024-07-24 20:24:24.619688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.045 [2024-07-24 20:24:24.619723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 00:30:21.045 [2024-07-24 20:24:24.619870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.045 [2024-07-24 20:24:24.619903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.045 qpair failed and we were unable to recover it. 
00:30:21.051 [2024-07-24 20:24:24.661705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.661738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.661937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.661970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.662159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.662192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.662389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.662421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.662635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.662673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.662807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.662840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.663015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.663047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.663227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.663260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.663459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.663493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.663701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.663734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 
00:30:21.051 [2024-07-24 20:24:24.663931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.663963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.664104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.664136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.664349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.664381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.664598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.664631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.664796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.664828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.665003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.665036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.665240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.665272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.665485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.665519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.665728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.665762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.665964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.665996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 
00:30:21.051 [2024-07-24 20:24:24.666172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.666205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.666394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.666426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.666566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.666599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.666805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.666838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.667011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.667044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.667247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.667280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.667498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.667531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.667733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.051 [2024-07-24 20:24:24.667765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.051 qpair failed and we were unable to recover it. 00:30:21.051 [2024-07-24 20:24:24.667935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.667967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.668168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.668201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 
00:30:21.052 [2024-07-24 20:24:24.668403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.668443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.668650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.668683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.668883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.668917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.669079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.669112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.669283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.669316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.669475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.669509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.669710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.669743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.669922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.669955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.670126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.670158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.670368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.670400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 
00:30:21.052 [2024-07-24 20:24:24.670615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.670648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.670866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.670899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.671111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.671144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.671319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.671352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.671488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.671527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.671698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.671731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.671932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.671965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.672166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.672199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.672406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.672445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.672573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.672606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 
00:30:21.052 [2024-07-24 20:24:24.672763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.672796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.672932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.672972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.673144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.673177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.052 [2024-07-24 20:24:24.673349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.052 [2024-07-24 20:24:24.673381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.052 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.673563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.673597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.673796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.673829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.674004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.674037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.674241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.674274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.674480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.674514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.674687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.674720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 
00:30:21.053 [2024-07-24 20:24:24.674885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.674918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.675117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.675150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.675310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.675342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.675496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.675530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.675738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.675771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.675949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.675982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.676180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.676213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.676413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.676455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.676656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.676689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.676882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.676915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 
00:30:21.053 [2024-07-24 20:24:24.677087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.677119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.677325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.677358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.677535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.677568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.677738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.677771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.677969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.678002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.678212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.678244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.678400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.678441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.678626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.678660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.678880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.678913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.679083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.679115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 
00:30:21.053 [2024-07-24 20:24:24.679251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.679284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.679446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.679479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.679656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.679691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.679875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.679908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.680118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.680156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.680300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.680333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.680546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.680580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.680749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.680788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.680984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.681017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 00:30:21.053 [2024-07-24 20:24:24.681193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.053 [2024-07-24 20:24:24.681225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.053 qpair failed and we were unable to recover it. 
00:30:21.054 [2024-07-24 20:24:24.681422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.681462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.681615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.681648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.681822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.681855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.682029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.682061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.682245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.682277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.682462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.682496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.682714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.682746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.682894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.682926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.683110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.683143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.683342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.683375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 
00:30:21.054 [2024-07-24 20:24:24.683583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.683617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.683821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.683854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.684029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.684062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.684233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.684266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.684444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.684478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.684651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.684683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.684848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.684880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.685079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.685112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.685312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.685345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.685543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.685577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 
00:30:21.054 [2024-07-24 20:24:24.685730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.685764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.685931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.685964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.686165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.686198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.686377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.686409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.686616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.686649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.686822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.686855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.687065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.687097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.687235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.687268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.687422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.687475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.687690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.687723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 
00:30:21.054 [2024-07-24 20:24:24.687901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.687934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.688113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.688146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.688285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.688318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.688527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.688560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.688770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.688809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.054 qpair failed and we were unable to recover it. 00:30:21.054 [2024-07-24 20:24:24.688988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.054 [2024-07-24 20:24:24.689022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.689229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.689262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.689446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.689480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.689654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.689687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.689899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.689932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 
00:30:21.055 [2024-07-24 20:24:24.690110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.690143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.690316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.690348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.690528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.690561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.690770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.690803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.690983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.691015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.691193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.691226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.691400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.691439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.691618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.691651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.691816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.691849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 00:30:21.055 [2024-07-24 20:24:24.692046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.055 [2024-07-24 20:24:24.692079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.055 qpair failed and we were unable to recover it. 
00:30:21.055 [2024-07-24 20:24:24.692249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.055 [2024-07-24 20:24:24.692282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:21.055 qpair failed and we were unable to recover it.
00:30:21.055 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times with advancing timestamps, 2024-07-24 20:24:24.692 through 20:24:24.737 ...]
00:30:21.062 [2024-07-24 20:24:24.737438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.737471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.737668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.737706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.737874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.737907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.738082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.738114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.738295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.738328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.738529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.738562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.738790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.738822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.739008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.739041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.739194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.739227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.739397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.739449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 
00:30:21.062 [2024-07-24 20:24:24.739654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.739686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.739860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.739893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.740057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.740090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.740262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.740295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.740507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.740541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.740759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.740793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.741004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.741036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.741159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.741192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.741374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.741416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.741596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.741629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 
00:30:21.062 [2024-07-24 20:24:24.741823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.741856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.742028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.742060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.742211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.742244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.742412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.742453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.742652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-24 20:24:24.742685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-24 20:24:24.742893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.742926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.743145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.743178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.743387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.743420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.743646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.743680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.743861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.743893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 
00:30:21.063 [2024-07-24 20:24:24.744091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.744123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.744306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.744340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.744499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.744532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.744715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.744747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.744897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.744930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.745117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.745149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.745356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.745388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.745548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.745582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.745794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.745827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.746007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.746039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 
00:30:21.063 [2024-07-24 20:24:24.746248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.746280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.746418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.746462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.746639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.746671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.746812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.746844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.747044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.747076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.747275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.747308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.747462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.747496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.747708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.747741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.747927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.747960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.748135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.748167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 
00:30:21.063 [2024-07-24 20:24:24.748359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.748394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.748535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.748569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.748720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.748759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.748929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.748962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-24 20:24:24.749161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-24 20:24:24.749196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.749341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.749375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.749554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.749590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.749795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.749830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.750046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.750081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.750259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.750295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 
00:30:21.064 [2024-07-24 20:24:24.750448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.750482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.750684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.750727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.750912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.750945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.751114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.751148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.751320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.751353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.751571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.751606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.751796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.751830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.752002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.752035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.752208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.752247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.752455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.752489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 
00:30:21.064 [2024-07-24 20:24:24.752705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.752739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.752956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.752993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.753154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.753187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.753406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.753451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.753642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.753675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.753907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.753942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.754143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.754177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.754355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.754389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.754606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.754641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.754855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.754893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 
00:30:21.064 [2024-07-24 20:24:24.755094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.755128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.755329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.755369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.755586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.755623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.755812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.755845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.756025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.756066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.756274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.756307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.756473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.756508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.756720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.756759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.756958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.756994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-24 20:24:24.757192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.757226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 
00:30:21.064 [2024-07-24 20:24:24.757395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-24 20:24:24.757437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.757598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.757633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.757814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.757847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.758037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.758071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.758244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.758279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.758472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.758507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.758720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.758754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.758970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.759011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.759225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.759259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.759423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.759481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 
00:30:21.065 [2024-07-24 20:24:24.759661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.759694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.759897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.759931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.760083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.760116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.760289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.760323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.760505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.760544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.760759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.760793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.760977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.761011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.761190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.761223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.761437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.761472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.761663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.761699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 
00:30:21.065 [2024-07-24 20:24:24.761908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.761942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.762147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.762182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.762352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.762384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.762588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.762623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.762806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.762842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.763039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.763072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.763288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.763322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.763552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.763591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.763809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.763843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.764028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.764062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 
00:30:21.065 [2024-07-24 20:24:24.764267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.764301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.764542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.764582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.764794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.764828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.764971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.765004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.765204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.765238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.765458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.765493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.765718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.765751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.765896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-24 20:24:24.765937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-24 20:24:24.766148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-24 20:24:24.766182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-24 20:24:24.766369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-24 20:24:24.766403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 
00:30:21.066 [2024-07-24 20:24:24.766595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-24 20:24:24.766631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it.
[... the three-line failure block above repeats continuously with advancing timestamps (2024-07-24 20:24:24.766 through 20:24:24.812, console time 00:30:21.066 through 00:30:21.350); every attempt is identical: connect() to 10.0.0.2 port 4420 fails with errno = 111, tqpair=0x7fe954000b90 reports a sock connection error, and the qpair cannot be recovered ...]
00:30:21.350 [2024-07-24 20:24:24.812867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.350 [2024-07-24 20:24:24.812900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.350 qpair failed and we were unable to recover it. 00:30:21.350 [2024-07-24 20:24:24.813102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.350 [2024-07-24 20:24:24.813135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.350 qpair failed and we were unable to recover it. 00:30:21.350 [2024-07-24 20:24:24.813314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.350 [2024-07-24 20:24:24.813347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.350 qpair failed and we were unable to recover it. 00:30:21.350 [2024-07-24 20:24:24.813554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.350 [2024-07-24 20:24:24.813587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.350 qpair failed and we were unable to recover it. 00:30:21.350 [2024-07-24 20:24:24.813719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.350 [2024-07-24 20:24:24.813751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.350 qpair failed and we were unable to recover it. 00:30:21.350 [2024-07-24 20:24:24.813931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.350 [2024-07-24 20:24:24.813963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.350 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.814143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.814176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.814398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.814437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.814629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.814663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.814866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.814898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 
00:30:21.351 [2024-07-24 20:24:24.815069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.815102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.815267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.815300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.815469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.815503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.815635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.815668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.815839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.815871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.816064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.816097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.816304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.816337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.816506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.816539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.816747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.816781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.816983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.817017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 
00:30:21.351 [2024-07-24 20:24:24.817212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.817244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.817453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.817491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.817665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.817698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.817881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.817914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.818118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.818151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.818354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.818387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.818561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.818594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.818791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.818824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.819026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.819058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.819200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.819232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 
00:30:21.351 [2024-07-24 20:24:24.819407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.819447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.819632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.819665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.819808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.819841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.820011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.820044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.820209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.820241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.820426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.820468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.820659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.820692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.820900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.820933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.821107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.821140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.821320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.821353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 
00:30:21.351 [2024-07-24 20:24:24.821527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.351 [2024-07-24 20:24:24.821560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.351 qpair failed and we were unable to recover it. 00:30:21.351 [2024-07-24 20:24:24.821729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.821762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.821934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.821967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.822181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.822214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.822394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.822426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.822608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.822641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.822818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.822850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.823055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.823087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.823299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.823332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.823494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.823528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 
00:30:21.352 [2024-07-24 20:24:24.823733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.823765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.823905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.823938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.824138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.824171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.824321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.824354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.824558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.824592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.824767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.824800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.824986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.825020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.825216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.825249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.825460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.825494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.825652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.825684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 
00:30:21.352 [2024-07-24 20:24:24.825866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.825899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.826077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.826115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.826273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.826305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.826482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.826516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.826689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.826722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.826905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.826937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.827077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.827111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.827268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.827309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.827507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.827540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.827740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.827773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 
00:30:21.352 [2024-07-24 20:24:24.827947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.827980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.828184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.352 [2024-07-24 20:24:24.828216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.352 qpair failed and we were unable to recover it. 00:30:21.352 [2024-07-24 20:24:24.828419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.828458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.828642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.828674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.828876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.828909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.829092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.829125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.829312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.829345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.829556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.829590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.829762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.829795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.830017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.830049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 
00:30:21.353 [2024-07-24 20:24:24.830215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.830257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.830450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.830484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.830689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.830722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.830895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.830928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.831127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.831159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.831308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.831340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.831542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.831575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.831783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.831816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.832018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.832051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.832249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.832282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 
00:30:21.353 [2024-07-24 20:24:24.832459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.832492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.832689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.832723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.832898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.832931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.833098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.833131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.833347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.833379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.833520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.833553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.833713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.833746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.833906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.833938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.834123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.834156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.834357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.834390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 
00:30:21.353 [2024-07-24 20:24:24.834570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.834604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.834815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.834854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.835068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.835101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.835308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.835341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.835537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.835570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.835772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.835805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.836006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.836039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.836217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.836250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.836460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.836493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 00:30:21.353 [2024-07-24 20:24:24.836674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.353 [2024-07-24 20:24:24.836707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.353 qpair failed and we were unable to recover it. 
00:30:21.353 [2024-07-24 20:24:24.836908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.836941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.837140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.837172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.837378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.837410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.837560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.837594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.837768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.837801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.837979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.838011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.838186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.838219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.838405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.838444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.838579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.838612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.838809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.838841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 
00:30:21.354 [2024-07-24 20:24:24.839020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.839053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.839222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.839255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.839456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.839490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.839700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.839733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.839906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.839939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.840124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.840156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.840364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.840397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.840578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.840611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.840790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.840823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 00:30:21.354 [2024-07-24 20:24:24.841025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.841058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it. 
00:30:21.354 [2024-07-24 20:24:24.841267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.354 [2024-07-24 20:24:24.841300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.354 qpair failed and we were unable to recover it.
00:30:21.354 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously with only the microsecond timestamps advancing, from 20:24:24.841 through 20:24:24.887 ...]
00:30:21.360 [2024-07-24 20:24:24.887144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.887177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it.
00:30:21.360 [2024-07-24 20:24:24.887364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.887397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.887611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.887644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.887858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.887891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.888078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.888112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.888292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.888325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.888519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.888552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.888721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.888754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.888918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.888950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.889133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.889166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.889345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.889378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 
00:30:21.360 [2024-07-24 20:24:24.889539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.889573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.889745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.889778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.889942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.889974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.890147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.890180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.890355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.890388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.890572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.890606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.890794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.890827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.891012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.891044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.891208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.891242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.891445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.891479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 
00:30:21.360 [2024-07-24 20:24:24.891655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.891688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.891854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.891887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.892087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.892120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.892256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.892289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.892501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.892535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.892748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.892781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.892983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.893016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-24 20:24:24.893192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-24 20:24:24.893225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.893423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.893467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.893640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.893673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 
00:30:21.361 [2024-07-24 20:24:24.893819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.893852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.894017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.894051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.894238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.894271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.894419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.894458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.894634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.894667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.894864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.894896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.895074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.895107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.895279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.895312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.895480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.895514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.895681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.895714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 
00:30:21.361 [2024-07-24 20:24:24.895892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.895925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.896089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.896122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.896303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.896335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.896464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.896498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.896686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.896719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.896886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.896919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.897080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.897113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.897249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.897292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.897458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.897493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.897663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.897696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 
00:30:21.361 [2024-07-24 20:24:24.897897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.897930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.898098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.898131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.898332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.898364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.898567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.898601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.898775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.898808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.899010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.899043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.899240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.899273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.899449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.899483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.899691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.899724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.899937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.899969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 
00:30:21.361 [2024-07-24 20:24:24.900167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.900200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.900401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.900441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.900605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.900638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.900816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.900849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.901021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.901054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-24 20:24:24.901255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-24 20:24:24.901288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.901467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.901501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.901662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.901695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.901893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.901931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.902105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.902138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 
00:30:21.362 [2024-07-24 20:24:24.902348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.902381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.902590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.902624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.902823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.902857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.903050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.903083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.903301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.903333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.903514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.903548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.903755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.903788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.904002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.904035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.904205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.904238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.904385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.904418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 
00:30:21.362 [2024-07-24 20:24:24.904573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.904606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.904805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.904838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.905010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.905043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.905242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.905275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.905448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.905483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.905615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.905648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.905820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.905853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.906066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.906099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.906261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.906293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.906474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.906507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 
00:30:21.362 [2024-07-24 20:24:24.906718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.906751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.906928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.906961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.907159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.907192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.907392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.907425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.907600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.907633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.907852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.907885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.908085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.908118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.908293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-24 20:24:24.908325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-24 20:24:24.908522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.908555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.908723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.908755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 
00:30:21.363 [2024-07-24 20:24:24.908960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.908993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.909160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.909193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.909331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.909363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.909547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.909581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.909786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.909819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.910001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.910033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.910232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.910265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.910411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.910450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.910652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.910691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.910898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.910931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 
00:30:21.363 [2024-07-24 20:24:24.911135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.911167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.911319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.911352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.911519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.911552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.911729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.911762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.911964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.911997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.912196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.912229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.912413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.912452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.912617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.912651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.912846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.912879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.913086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.913119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 
00:30:21.363 [2024-07-24 20:24:24.913319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.913353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.913565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.913600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.913817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.913850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.914020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.914053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.914223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.914257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.914435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.914469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.914653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.914685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.914897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.914930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.915101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.915140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.915313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.915345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 
00:30:21.363 [2024-07-24 20:24:24.915518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.915551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.915757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.915790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.915962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.915995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.916172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.916205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-24 20:24:24.916378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-24 20:24:24.916411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-24 20:24:24.916569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-24 20:24:24.916603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-24 20:24:24.916814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-24 20:24:24.916846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-24 20:24:24.917052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-24 20:24:24.917085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-24 20:24:24.917251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-24 20:24:24.917284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-24 20:24:24.917459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-24 20:24:24.917493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 
00:30:21.369 [2024-07-24 20:24:24.960753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-24 20:24:24.960786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-24 20:24:24.960961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-24 20:24:24.960994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-24 20:24:24.961192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-24 20:24:24.961226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-24 20:24:24.961392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-24 20:24:24.961425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-24 20:24:24.961623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-24 20:24:24.961656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-24 20:24:24.961801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-24 20:24:24.961833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-24 20:24:24.962030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-24 20:24:24.962063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-24 20:24:24.962242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-24 20:24:24.962275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-24 20:24:24.962444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.962478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.962684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.962716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 
00:30:21.370 [2024-07-24 20:24:24.962895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.962928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.963101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.963134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.963279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.963312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.963493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.963526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.963698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.963731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.963896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.963929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.964091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.964124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.964250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.964283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.964456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.964489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.964689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.964722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 
00:30:21.370 [2024-07-24 20:24:24.964941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.964973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.965180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.965213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.965419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.965458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.965585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.965618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.965793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.965826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.965998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.966030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.966205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.966237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.966368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.966401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.966582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.966615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.966815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.966848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 
00:30:21.370 [2024-07-24 20:24:24.967048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.967082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.967274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.967306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.967519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.967554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.967750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.967783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.967967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.968000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.968207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.968246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.968388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.968421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.968615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.968648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.968791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.968824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.969033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.969065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 
00:30:21.370 [2024-07-24 20:24:24.969276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.969309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.969485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.969518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.969700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.969733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.969930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.969963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.970138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.970171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.970337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-24 20:24:24.970371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-24 20:24:24.970521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.970555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.970721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.970754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.970926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.970959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.971173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.971206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 
00:30:21.371 [2024-07-24 20:24:24.971420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.971471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.971645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.971677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.971816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.971849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.972016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.972049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.972218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.972250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.972453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.972487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.972683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.972715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.972866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.972899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.973057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.973089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.973256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.973289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 
00:30:21.371 [2024-07-24 20:24:24.973455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.973488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.973656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.973689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.973906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.973939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.974112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.974144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.974321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.974353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.974537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.974570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.974786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.974819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.975005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.975038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.975188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.975221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.975419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.975460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 
00:30:21.371 [2024-07-24 20:24:24.975652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.975685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.975832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.975864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.976046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.976078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.976281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.976313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.976512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.976545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.976687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.976725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.976893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.976926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.977133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-24 20:24:24.977165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-24 20:24:24.977344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.977377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.977557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.977590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 
00:30:21.372 [2024-07-24 20:24:24.977733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.977765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.977913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.977946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.978151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.978183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.978347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.978380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.978556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.978589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.978773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.978806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.978971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.979003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.979205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.979239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.979388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.979421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.979616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.979649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 
00:30:21.372 [2024-07-24 20:24:24.979826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.979865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.980041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.980074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.980273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.980306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.980519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.980554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.980745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.980778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.980961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.980994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.981213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.981246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.981413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.981453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.981597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.981630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.981829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.981862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 
00:30:21.372 [2024-07-24 20:24:24.982038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.982071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.982234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.982267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.982421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.982473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.982673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.982706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.982883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.982916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.983093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.983126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.983303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.983336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.983537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.983571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.983781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.983813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.984013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.984046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 
00:30:21.372 [2024-07-24 20:24:24.984237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.984270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.984450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.984484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.984643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.984676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.984861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.984894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.985092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.985125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.985319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-24 20:24:24.985357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-24 20:24:24.985547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.985580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.985784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.985817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.985994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.986027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.986243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.986275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 
00:30:21.373 [2024-07-24 20:24:24.986425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.986466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.986674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.986708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.986886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.986919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.987078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.987111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.987285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.987318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.987519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.987552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.987755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.987788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.987967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.988000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.988174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.988207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-24 20:24:24.988395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-24 20:24:24.988438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 
00:30:21.373 [2024-07-24 20:24:24.988607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.373 [2024-07-24 20:24:24.988639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:21.373 qpair failed and we were unable to recover it.
00:30:21.373 [2024-07-24 20:24:24.988853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.373 [2024-07-24 20:24:24.988886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:21.373 qpair failed and we were unable to recover it.
00:30:21.373 [... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 2024-07-24 20:24:24.989 through 20:24:25.033 ...]
00:30:21.379 [2024-07-24 20:24:25.033631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.033664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.033837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.033871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.034040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.034073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.034245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.034279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.034475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.034510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.034685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.034719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.034881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.034915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.035084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.035117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.035279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.035313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.035511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.035545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 
00:30:21.379 [2024-07-24 20:24:25.035748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.035782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.035951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.035984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.036183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.036217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.036399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.036447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.036657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.036690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.036871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.036910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.037043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.037076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.037238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.037270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.037471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.037506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.037716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.037749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 
00:30:21.379 [2024-07-24 20:24:25.037930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.037964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.038122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.038163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.038341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.038374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.038562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.038595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.038747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.038791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.038997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.039030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.039237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.039270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.039447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.039481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.039682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.039715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.039936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.039969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 
00:30:21.379 [2024-07-24 20:24:25.040137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.040179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.040328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.040362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.040557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.040591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.040795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.040828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.041038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.379 [2024-07-24 20:24:25.041072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.379 qpair failed and we were unable to recover it. 00:30:21.379 [2024-07-24 20:24:25.041228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.041261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.041480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.041514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.041668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.041702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.041874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.041907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.042107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.042140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 
00:30:21.380 [2024-07-24 20:24:25.042340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.042373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.042579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.042613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.042763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.042797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.042969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.043004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.043179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.043212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.043388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.043422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.043619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.043652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.043853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.043886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.044060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.044093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.044281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.044314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 
00:30:21.380 [2024-07-24 20:24:25.044513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.044546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.044721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.044754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.044922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.044955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.045118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.045152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.045355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.045388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.045591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.045633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.045832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.045865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.046065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.046098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.046254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.046287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.046463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.046497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 
00:30:21.380 [2024-07-24 20:24:25.046644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.046678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.046879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.046912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.047118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.047152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.047327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.047361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.047560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.047594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.047755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.047788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.047954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.047987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-24 20:24:25.048160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-24 20:24:25.048194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.048366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.048398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.048651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.048685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 
00:30:21.381 [2024-07-24 20:24:25.048831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.048864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.049033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.049076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.049288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.049321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.049508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.049543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.049741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.049774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.049950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.049983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.050157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.050190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.050363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.050396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.050589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.050623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.050821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.050854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 
00:30:21.381 [2024-07-24 20:24:25.051028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.051061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.051248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.051282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.051455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.051489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.051636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.051669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.051836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.051869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.052044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.052078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.052280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.052314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.052484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.052519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.052666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.052700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.052901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.052934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 
00:30:21.381 [2024-07-24 20:24:25.053113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.053146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.053317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.053350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.053497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.053532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.053712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.053746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.053936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.053969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.054147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.054185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.054361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.054394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.054590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.054623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.054801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.054834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.055043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.055077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 
00:30:21.381 [2024-07-24 20:24:25.055237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.055270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.055404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.055444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.055628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.055661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.055873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.055907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.056081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.056114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-24 20:24:25.056324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-24 20:24:25.056357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.056527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.056561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.056773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.056806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.056954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.056987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.057153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.057187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 
00:30:21.382 [2024-07-24 20:24:25.057385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.057418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.057600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.057635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.057804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.057837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.058038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.058071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.058268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.058302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.058444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.058478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.058659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.058693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.058852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.058885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.059082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.059115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.059327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.059360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 
00:30:21.382 [2024-07-24 20:24:25.059537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.059572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.059788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.059822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.060009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.060043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.060247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.060280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.060493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.060527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.060728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.060761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.060945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.060978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.061182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.061215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.061356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.061390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 00:30:21.382 [2024-07-24 20:24:25.061534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.061568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it. 
00:30:21.382 [2024-07-24 20:24:25.061752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.382 [2024-07-24 20:24:25.061785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.382 qpair failed and we were unable to recover it.
[... the same three messages repeat continuously from 20:24:25.061752 to 20:24:25.106617: posix_sock_create connect() failure with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:30:21.388 [2024-07-24 20:24:25.106583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.106617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it.
00:30:21.388 [2024-07-24 20:24:25.106781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.106814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.106983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.107017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.107202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.107236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.107390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.107423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.107603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.107636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.107835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.107869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.108068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.108102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.108247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.108280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.108461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.108496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.108708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.108742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 
00:30:21.388 [2024-07-24 20:24:25.108906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.108940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.109058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.109092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.109295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.109327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.109555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.109589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.109720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.109754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.109905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.109938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.110094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.110127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.110337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.110370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.110576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.110610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.110796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.110829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 
00:30:21.388 [2024-07-24 20:24:25.111028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.111062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.111233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.111267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.388 [2024-07-24 20:24:25.111468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.388 [2024-07-24 20:24:25.111507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.388 qpair failed and we were unable to recover it. 00:30:21.389 [2024-07-24 20:24:25.111694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.389 [2024-07-24 20:24:25.111727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.389 qpair failed and we were unable to recover it. 00:30:21.389 [2024-07-24 20:24:25.111901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.389 [2024-07-24 20:24:25.111934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.389 qpair failed and we were unable to recover it. 00:30:21.389 [2024-07-24 20:24:25.112125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.389 [2024-07-24 20:24:25.112158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.389 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.112333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.112367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.112541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.112575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.112749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.112782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.112980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.113013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 
00:30:21.661 [2024-07-24 20:24:25.113180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.113213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.113396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.113436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.113635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.113669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.113854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.113887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.114073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.114107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.114323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.114356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.114511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.114545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.114727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.114760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.114969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.115003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.115155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.115188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 
00:30:21.661 [2024-07-24 20:24:25.115371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.115404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.115609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.115643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.115854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.115887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.116045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.116078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.116287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.116319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.116524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.116559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.116714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.116747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.116930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.116963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.117127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.117171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.117347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.117381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 
00:30:21.661 [2024-07-24 20:24:25.117571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.117606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.117779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.117812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.117994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.118027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.661 qpair failed and we were unable to recover it. 00:30:21.661 [2024-07-24 20:24:25.118238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.661 [2024-07-24 20:24:25.118271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.118446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.118480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.118680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.118713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.118881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.118915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.119095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.119128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.119338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.119371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.119550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.119584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 
00:30:21.662 [2024-07-24 20:24:25.119769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.119802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.119934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.119976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.120186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.120225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.120386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.120436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.120588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.120622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.120793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.120826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.120988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.121022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.121194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.121227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.121440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.121473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.121682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.121716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 
00:30:21.662 [2024-07-24 20:24:25.121887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.121921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.122090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.122133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.122307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.122340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.122547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.122581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.122784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.122818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.122986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.123019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.123198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.123231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.123403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.123443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.123657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.123691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.123826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.123859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 
00:30:21.662 [2024-07-24 20:24:25.124055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.124088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.124275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.124309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.124479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.124513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.124646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.124678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.124887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.124921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.125074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.125107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.125273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.125307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.125483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.125518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.125687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.125720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.662 [2024-07-24 20:24:25.125897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.125931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 
00:30:21.662 [2024-07-24 20:24:25.126103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.662 [2024-07-24 20:24:25.126136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.662 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.126280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.126313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.126499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.126543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.126724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.126757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.126929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.126968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.127166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.127200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.127403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.127442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.127621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.127656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.127824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.127857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.128031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.128064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 
00:30:21.663 [2024-07-24 20:24:25.128228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.128261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.128469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.128503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.128723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.128761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.128935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.128968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.129142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.129174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.129342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.129375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.129550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.129585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.129784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.129817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.130037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.130069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.130208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.130241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 
00:30:21.663 [2024-07-24 20:24:25.130444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.130477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.130657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.130690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.130871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.130904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.131077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.131110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.131280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.131313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.131515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.131548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.131758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.131792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.131987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.132020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.132219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.132252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.132459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.132494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 
00:30:21.663 [2024-07-24 20:24:25.132672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.132705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.132888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.132921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.133065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.133097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.133232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.133265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.133396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.133436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.133626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.133660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.133819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.663 [2024-07-24 20:24:25.133854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.663 qpair failed and we were unable to recover it. 00:30:21.663 [2024-07-24 20:24:25.134027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.134060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.134205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.134237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.134404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.134445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 
00:30:21.664 [2024-07-24 20:24:25.134610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.134650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.134789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.134822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.135002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.135034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.135231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.135264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.135445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.135478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.135691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.135724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.135843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.135877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.136042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.136075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.136242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.136275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.136460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.136494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 
00:30:21.664 [2024-07-24 20:24:25.136671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.136704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.136859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.136892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.137070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.137109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.137322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.137355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.137540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.137574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.137752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.137785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.137953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.137986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.138166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.138199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.138363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.138395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.138584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.138618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 
00:30:21.664 [2024-07-24 20:24:25.138818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.138852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.139062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.139095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.139275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.139308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.139503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.139537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.139882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.139915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.140109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.140143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.140343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.140377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.140549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.140584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.140756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.140789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.140961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.140994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 
00:30:21.664 [2024-07-24 20:24:25.141204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.141236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.141426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.141467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.141654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.141687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.141861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.141893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.664 [2024-07-24 20:24:25.142067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.664 [2024-07-24 20:24:25.142100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.664 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.142297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.142329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.142500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.142534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.142707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.142740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.142947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.142980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.143150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.143184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 
00:30:21.665 [2024-07-24 20:24:25.143333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.143365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.143554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.143588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.143796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.143829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.144025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.144058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.144230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.144263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.144457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.144491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.144691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.144724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.144891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.144924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.145126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.145159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.145332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.145366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 
00:30:21.665 [2024-07-24 20:24:25.145569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.145603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.145769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.145802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.145977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.146015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.146190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.146222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.146439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.146473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.146648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.146691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.146870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.146904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.147078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.147112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.147297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.147329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.147545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.147579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 
00:30:21.665 [2024-07-24 20:24:25.147794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.147828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.148004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.148037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.148183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.148215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.148375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.148408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.148595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.665 [2024-07-24 20:24:25.148629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.665 qpair failed and we were unable to recover it. 00:30:21.665 [2024-07-24 20:24:25.148801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.148834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.149038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.149072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.149247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.149280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.149412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.149453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.149625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.149658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 
00:30:21.666 [2024-07-24 20:24:25.149840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.149873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.150049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.150082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.150281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.150314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.150502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.150536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.150755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.150788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.150948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.150981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.151193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.151226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.151371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.151405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.151592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.151626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.151815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.151848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 
00:30:21.666 [2024-07-24 20:24:25.152041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.152074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.152281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.152314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.152454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.152488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.152662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.152696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.152897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.152930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.153079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.153113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.153292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.153325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.153511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.153555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.153769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.153802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.154013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.154046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 
00:30:21.666 [2024-07-24 20:24:25.154222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.154255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.154449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.154483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.154684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.154723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.154906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.154938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.155102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.155135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.155333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.155366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.155512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.666 [2024-07-24 20:24:25.155546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.666 qpair failed and we were unable to recover it. 00:30:21.666 [2024-07-24 20:24:25.155689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.155722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.155890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.155923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.156112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.156146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 
00:30:21.667 [2024-07-24 20:24:25.156315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.156350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.156504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.156537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.156735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.156768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.156919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.156952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.157157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.157190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.157359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.157393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.157580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.157615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.157795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.157828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.158030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.158063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.158267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.158300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 
00:30:21.667 [2024-07-24 20:24:25.158475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.158509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.158709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.158743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.158914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.158947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.159147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.159179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.159353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.159386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.159556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.159590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.159715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.159749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.159956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.159989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.160164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.160197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.160408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.160450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 
00:30:21.667 [2024-07-24 20:24:25.160604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.160638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.160814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.160847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.160978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.161011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.161182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.161215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.161420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.161461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.161664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.161697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.161871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.161904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.162051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.162084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.162290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.162323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.162496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.162531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 
00:30:21.667 [2024-07-24 20:24:25.162666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.162699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.162863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.162896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.163066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.163105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.163265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.667 [2024-07-24 20:24:25.163298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.667 qpair failed and we were unable to recover it. 00:30:21.667 [2024-07-24 20:24:25.163466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.163500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.163669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.163714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.163887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.163922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.164094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.164128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.164307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.164340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.164497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.164531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 
00:30:21.668 [2024-07-24 20:24:25.164701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.164735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.164903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.164936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.165143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.165176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.165324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.165356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.165510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.165545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.165734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.165768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.165958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.165991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.166146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.166179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.166341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.166375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.166569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.166602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 
00:30:21.668 [2024-07-24 20:24:25.166797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.166831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.167005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.167039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.167211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.167244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.167418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.167469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.167640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.167673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.167840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.167873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.168042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.168075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.168270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.168304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.168486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.168521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.168693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.168727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 
00:30:21.668 [2024-07-24 20:24:25.168910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.168943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.169077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.169110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.169274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.169307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.169444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.169477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.169636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.169670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.169845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.169878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.170025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.170058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.170264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.170297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.170492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.170526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.170699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.170732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 
00:30:21.668 [2024-07-24 20:24:25.170938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.170971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.668 [2024-07-24 20:24:25.171187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.668 [2024-07-24 20:24:25.171221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.668 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.171422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.171467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.171656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.171689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.171853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.171886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.172040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.172074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.172229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.172263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.172403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.172443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.172603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.172636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.172775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.172809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 
00:30:21.669 [2024-07-24 20:24:25.172952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.172985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.173157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.173190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.173348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.173381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.173530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.173564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.173750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.173784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.173958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.173992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.174214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.174247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.174406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.174446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.174599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.174632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.174832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.174865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 
00:30:21.669 [2024-07-24 20:24:25.175119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.175151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.175322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.175355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.175533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.175568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.175743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.175777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.175963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.175996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.176167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.176200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.176397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.176444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.176595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.176629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.176767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.176801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.176954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.176988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 
00:30:21.669 [2024-07-24 20:24:25.177165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.177197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.177398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.177543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.177780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.177834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe94c000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.178023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.178059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.178251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.178284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.178470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.178504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.178648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.178681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.178828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.178861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.179028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.179062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 00:30:21.669 [2024-07-24 20:24:25.179230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.669 [2024-07-24 20:24:25.179263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.669 qpair failed and we were unable to recover it. 
00:30:21.670 [2024-07-24 20:24:25.179440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.179474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.179650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.179683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.179850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.179883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.180070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.180104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.180272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.180306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.180449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.180484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.180651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.180685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.180843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.180877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.181080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.181113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.181318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.181352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 
00:30:21.670 [2024-07-24 20:24:25.181528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.181563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.181734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.181767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.181967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.182001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.182145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.182178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.182354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.182388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.182592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.182626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.182839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.182872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.183013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.183047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.183249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.183283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.183488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.183522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 
00:30:21.670 [2024-07-24 20:24:25.183678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.183711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.183892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.183926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.184104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.184137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.184325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.184359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.184532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.184567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.184720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.184753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.184950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.184983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.185157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.185191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.185373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.185406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.670 [2024-07-24 20:24:25.185571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.185610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 
00:30:21.670 [2024-07-24 20:24:25.185751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.670 [2024-07-24 20:24:25.185784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.670 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.185976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.186009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.186152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.186186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.186389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.186421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.186587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.186620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.186823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.186857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.187028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.187061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.187236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.187269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.187463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.187498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.187650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.187683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 
00:30:21.671 [2024-07-24 20:24:25.187860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.187894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.188093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.188127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.188294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.188328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.188521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.188556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.188757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.188791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.188974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.189008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.189141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.189177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.189386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.189420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.189620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.189654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.189823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.189856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 
00:30:21.671 [2024-07-24 20:24:25.190031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.190064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.190239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.190272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.190417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.190458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.190608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.190641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.190844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.190877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.191051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.191084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.191295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.191329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.191504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.191539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.191748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.191780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.191951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.191984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 
00:30:21.671 [2024-07-24 20:24:25.192157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.192191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.192360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.192393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.192592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.192626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.192814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.192848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.193027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.193059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.193232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.193266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.193482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.193517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.671 [2024-07-24 20:24:25.193697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.671 [2024-07-24 20:24:25.193731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.671 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.193903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.193936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.194114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.194154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 
00:30:21.672 [2024-07-24 20:24:25.194336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.194369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.194549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.194583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.194800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.194834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.195037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.195071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.195253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.195286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.195467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.195501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.195667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.195700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.195895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.195929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.196075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.196108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.196313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.196346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 
00:30:21.672 [2024-07-24 20:24:25.196475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.196509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.196687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.196721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.196884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.196917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.197109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.197142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.197305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.197338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.197479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.197513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.197714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.197747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.197892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.197925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.198068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.198103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.198308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.198341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 
00:30:21.672 [2024-07-24 20:24:25.198525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.198558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.198767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.198800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.198960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.198993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.199206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.199240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.199374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.199407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.199587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.199620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.199832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.199865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.200057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.200090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.200286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.200319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.200520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.200554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 
00:30:21.672 [2024-07-24 20:24:25.200711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.200744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.200864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.200897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.201117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.201150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.201334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.201367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.201552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.201585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.672 [2024-07-24 20:24:25.201746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.672 [2024-07-24 20:24:25.201779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.672 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.201950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.201983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.202157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.202190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.202385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.202418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.202607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.202646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 
00:30:21.673 [2024-07-24 20:24:25.202850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.202883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.203080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.203113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.203283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.203316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.203490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.203524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.203726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.203759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.203929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.203961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.204136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.204168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.204368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.204402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.204583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.204616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.204817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.204850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 
00:30:21.673 [2024-07-24 20:24:25.205024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.205058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.205231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.205263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.205482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.205517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.205661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.205694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.205871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.205904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.206118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.206150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.206357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.206390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.206558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.206592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.206726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.206759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 00:30:21.673 [2024-07-24 20:24:25.206932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.673 [2024-07-24 20:24:25.206964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.673 qpair failed and we were unable to recover it. 
00:30:21.673 [2024-07-24 20:24:25.207161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.673 [2024-07-24 20:24:25.207194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:21.673 qpair failed and we were unable to recover it.
[... the three lines above repeat verbatim for 59 further connection attempts (errno 111 = ECONNREFUSED), timestamps 20:24:25.207401 through 20:24:25.220303 ...]
00:30:21.675 [2024-07-24 20:24:25.220526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.220560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.220730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.220763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.220939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.220971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.221135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.221168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.221338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.221371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.221550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.221583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.221792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.221825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.221990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.222032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.222231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:21.675 [2024-07-24 20:24:25.222264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 
00:30:21.675 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:21.675 [2024-07-24 20:24:25.222449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.222484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.222629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:21.675 [2024-07-24 20:24:25.222662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:21.675 [2024-07-24 20:24:25.222872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.222905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:21.675 [2024-07-24 20:24:25.223111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.223145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.223356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.223391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.223603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.223637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.223802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.223835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 00:30:21.675 [2024-07-24 20:24:25.224032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.675 [2024-07-24 20:24:25.224065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.675 qpair failed and we were unable to recover it. 
00:30:21.676 [2024-07-24 20:24:25.224228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.676 [2024-07-24 20:24:25.224261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:21.676 qpair failed and we were unable to recover it.
[... the same triplet repeats for 89 further attempts, timestamps 20:24:25.224474 through 20:24:25.242677 ...]
00:30:21.678 [2024-07-24 20:24:25.242853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.242886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.243091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.243124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.243297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.243329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.243520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.243554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.243707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.243740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.243881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.243914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.244088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.244120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.244282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.244315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.244505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.244538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 
00:30:21.678 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:21.678 [2024-07-24 20:24:25.244721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.244754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.678 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:21.678 [2024-07-24 20:24:25.244963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.244997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.245198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.245231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.245446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.245488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.245654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.245693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.245892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.245924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.246091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.246124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 00:30:21.678 [2024-07-24 20:24:25.246328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.678 [2024-07-24 20:24:25.246361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420 00:30:21.678 qpair failed and we were unable to recover it. 
00:30:21.678 [2024-07-24 20:24:25.246523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.678 [2024-07-24 20:24:25.246558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:21.678 qpair failed and we were unable to recover it.
[... the same triplet repeats roughly 119 more times, 20:24:25.246711 through 20:24:25.272307, every attempt against tqpair=0x7fe954000b90 at 10.0.0.2:4420 ...]
00:30:21.682 Malloc0
00:30:21.682 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.682 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:21.682 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.682 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved with the xtrace lines above, the connect() failed (errno = 111) / qpair-failed triplet repeats 8 times, 20:24:25.272521 through 20:24:25.274133 ...]
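nvmf_create_transport has to instantiate the TCP transport inside the target before any subsystem listener can use it; the -o flag here appears to be passed straight through from the harness's transport options. A standalone sketch under the same ./spdk and default-socket assumptions as above:

    # create the NVMe-oF TCP transport, mirroring the traced command
    ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o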
00:30:21.682 [2024-07-24 20:24:25.274344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.682 [2024-07-24 20:24:25.274377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:21.682 qpair failed and we were unable to recover it.
[... the same triplet repeats 9 more times, 20:24:25.274558 through 20:24:25.276363 ...]
00:30:21.683 [2024-07-24 20:24:25.276389] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... the connect() failed (errno = 111) / qpair-failed triplet continues, 9 repeats, 20:24:25.276569 through 20:24:25.278331 ...]
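The *** TCP Transport Init *** notice from tcp.c is the target confirming that the transport requested by the previous RPC came up. If it needs checking by hand, a sketch under the same assumptions as the earlier rpc.py calls:

    # dump the transports the target has created, as JSON
    ./spdk/scripts/rpc.py nvmf_get_transports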
00:30:21.683 [2024-07-24 20:24:25.278520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.683 [2024-07-24 20:24:25.278553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420
00:30:21.683 qpair failed and we were unable to recover it.
00:30:21.683 [... the same connect()-failed / sock-connection-error / qpair-failed triplet repeats for every retry from 20:24:25.278727 through 20:24:25.284614; duplicates elided ...]
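errno = 111 in the loop above is ECONNREFUSED: the host-side initiator is retrying 10.0.0.2:4420 before the target's TCP listener exists (it comes up at the *NOTICE* line further down). A minimal probe showing the same condition from the host side, using only bash's /dev/tcp redirection (illustrative sketch; not part of the harness):

  # fails while nothing is accepting on 10.0.0.2:4420
  if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "10.0.0.2:4420 refused the connection (ECONNREFUSED, errno 111)"
  fi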
00:30:21.684 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.684 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:21.684 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.684 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:21.684 [... host-side connect() retries (errno = 111) from 20:24:25.284786 through 20:24:25.286281 were interleaved with the trace above; duplicates elided ...]
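rpc_cmd above is the autotest wrapper around SPDK's stock JSON-RPC client. Outside the harness the same step would look like the sketch below; the scripts/rpc.py path and the default RPC socket are assumptions (neither appears in this excerpt), while the flags are verbatim from the trace (-a allows any host NQN to connect, -s sets the subsystem serial number):

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001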
00:30:21.684 [... connect() retry loop continues: posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fe954000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." repeating from 20:24:25.286477 through 20:24:25.292407; duplicates elided ...]
00:30:21.685 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.685 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:21.685 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.685 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:21.685 [... connect() retries (errno = 111) from 20:24:25.292558 through 20:24:25.294092 were interleaved with the trace above; duplicates elided ...]
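The trace now attaches a namespace to the subsystem; Malloc0 is a bdev presumably created earlier in the script (not visible in this excerpt). The equivalent direct call, under the same scripts/rpc.py assumption as above:

  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0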
00:30:21.685 [... connect() retry loop continues (errno = 111 against 10.0.0.2:4420, tqpair=0x7fe954000b90) from 20:24:25.294256 through 20:24:25.300163; duplicates elided ...]
00:30:21.686 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.686 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:21.686 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.686 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:21.686 [... connect() retries (errno = 111) from 20:24:25.300329 through 20:24:25.301684 were interleaved with the trace above; duplicates elided ...]
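This is the step the retry loop has been waiting for: adding the TCP listener on 10.0.0.2:4420. Equivalent direct call (same scripts/rpc.py assumption); -t, -a and -s are trtype, traddr and trsvcid, taken verbatim from the trace:

  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420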
00:30:21.686 [... connect() retry loop continues (errno = 111) from 20:24:25.301843 through 20:24:25.303653; duplicates elided ...]
00:30:21.687 [... final connect() retries (errno = 111) from 20:24:25.303843 through 20:24:25.304664; duplicates elided ...]
00:30:21.687 [2024-07-24 20:24:25.304742] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:21.687 [2024-07-24 20:24:25.307321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.687 [2024-07-24 20:24:25.307493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.687 [2024-07-24 20:24:25.307534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.687 [2024-07-24 20:24:25.307556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.687 [2024-07-24 20:24:25.307572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:21.687 [2024-07-24 20:24:25.307617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:21.687 qpair failed and we were unable to recover it.
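Once the *NOTICE* shows the listener up, the failure mode changes: TCP connect() now succeeds, but the fabrics-level CONNECT for I/O qpair id 2 is rejected because the target no longer recognizes controller ID 0x1. Reading the status fields: sct 1 is command-specific status, and sc 130 is 0x82, which in the NVMe-oF fabrics command set corresponds to Connect Invalid Parameters (this decoding is the editor's reading of the spec, not stated in the log). A quick host-side check of the listener, assuming nvme-cli is installed (it does not appear in this log):

  printf 'sc 130 = 0x%x\n' 130        # prints 0x82
  nvme discover -t tcp -a 10.0.0.2 -s 4420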
00:30:21.687 20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
20:24:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2173106
[2024-07-24 20:24:25.317102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-07-24 20:24:25.317235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-07-24 20:24:25.317271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-07-24 20:24:25.317293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-24 20:24:25.317311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
[2024-07-24 20:24:25.317353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
qpair failed and we were unable to recover it.
00:30:21.687 [... an identical CONNECT-failure block at 20:24:25.327154 through 20:24:25.327416 elided ...]
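target_disconnect.sh@50 then blocks on wait 2173106, the PID of a host-side process launched into the background earlier in the script (that launch is not shown in this excerpt). A hedged sketch of the harness pattern, with a hypothetical placeholder for the backgrounded step:

  some_host_workload &          # hypothetical stand-in for the backgrounded step
  workload_pid=$!
  # ... target is reconfigured / disconnected while the workload runs ...
  wait "$workload_pid"          # what target_disconnect.sh@50 does with 2173106
  echo "workload exited with status $?"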
00:30:21.687 [... the same CONNECT-failure block (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> Failed to poll NVMe-oF Fabric CONNECT command -> Failed to connect tqpair=0x7fe954000b90 -> CQ transport error -6 (No such device or address) on qpair id 2 -> qpair failed and we were unable to recover it.) repeats for each subsequent attempt from 20:24:25.337079 through 20:24:25.537893; duplicates elided ...]
00:30:21.949 [2024-07-24 20:24:25.547683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.547835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.547869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.547889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.547907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.547947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 00:30:21.949 [2024-07-24 20:24:25.557701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.557843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.557878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.557898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.557916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.557956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 00:30:21.949 [2024-07-24 20:24:25.567751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.567888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.567921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.567941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.567958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.567997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 
00:30:21.949 [2024-07-24 20:24:25.577745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.577898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.577932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.577952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.577969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.578011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 00:30:21.949 [2024-07-24 20:24:25.587805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.587954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.587989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.588009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.588027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.588066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 00:30:21.949 [2024-07-24 20:24:25.597866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.597997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.598035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.598056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.598074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.598113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 
00:30:21.949 [2024-07-24 20:24:25.607861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.608012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.608046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.608074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.608093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.608133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 00:30:21.949 [2024-07-24 20:24:25.617862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.618009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.618043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.618063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.618081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.618121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 00:30:21.949 [2024-07-24 20:24:25.627869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.628011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.628046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.628066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.628084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.628124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 
00:30:21.949 [2024-07-24 20:24:25.637927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.638067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.638101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.638120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.638139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.638179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 00:30:21.949 [2024-07-24 20:24:25.647909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.648050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.648084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.648104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.648123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.648162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.949 qpair failed and we were unable to recover it. 00:30:21.949 [2024-07-24 20:24:25.658045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.949 [2024-07-24 20:24:25.658198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.949 [2024-07-24 20:24:25.658232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.949 [2024-07-24 20:24:25.658252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.949 [2024-07-24 20:24:25.658270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.949 [2024-07-24 20:24:25.658310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.950 qpair failed and we were unable to recover it. 
00:30:21.950 [2024-07-24 20:24:25.667999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.950 [2024-07-24 20:24:25.668138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.950 [2024-07-24 20:24:25.668172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.950 [2024-07-24 20:24:25.668192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.950 [2024-07-24 20:24:25.668210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.950 [2024-07-24 20:24:25.668249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.950 qpair failed and we were unable to recover it. 00:30:21.950 [2024-07-24 20:24:25.678030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.950 [2024-07-24 20:24:25.678164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.950 [2024-07-24 20:24:25.678198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.950 [2024-07-24 20:24:25.678218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.950 [2024-07-24 20:24:25.678236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.950 [2024-07-24 20:24:25.678278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.950 qpair failed and we were unable to recover it. 00:30:21.950 [2024-07-24 20:24:25.688087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.950 [2024-07-24 20:24:25.688221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.950 [2024-07-24 20:24:25.688256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.950 [2024-07-24 20:24:25.688275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.950 [2024-07-24 20:24:25.688293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.950 [2024-07-24 20:24:25.688333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.950 qpair failed and we were unable to recover it. 
00:30:21.950 [2024-07-24 20:24:25.698157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.950 [2024-07-24 20:24:25.698299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.950 [2024-07-24 20:24:25.698340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.950 [2024-07-24 20:24:25.698361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.950 [2024-07-24 20:24:25.698379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.950 [2024-07-24 20:24:25.698419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.950 qpair failed and we were unable to recover it. 00:30:21.950 [2024-07-24 20:24:25.708188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.950 [2024-07-24 20:24:25.708328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.950 [2024-07-24 20:24:25.708362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.950 [2024-07-24 20:24:25.708382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.950 [2024-07-24 20:24:25.708400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.950 [2024-07-24 20:24:25.708449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.950 qpair failed and we were unable to recover it. 00:30:21.950 [2024-07-24 20:24:25.718122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.950 [2024-07-24 20:24:25.718259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.950 [2024-07-24 20:24:25.718293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.950 [2024-07-24 20:24:25.718313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.950 [2024-07-24 20:24:25.718330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.950 [2024-07-24 20:24:25.718370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.950 qpair failed and we were unable to recover it. 
00:30:21.950 [2024-07-24 20:24:25.728196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.950 [2024-07-24 20:24:25.728334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.950 [2024-07-24 20:24:25.728369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.950 [2024-07-24 20:24:25.728389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.950 [2024-07-24 20:24:25.728407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:21.950 [2024-07-24 20:24:25.728454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.950 qpair failed and we were unable to recover it. 00:30:22.209 [2024-07-24 20:24:25.738245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.209 [2024-07-24 20:24:25.738398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.209 [2024-07-24 20:24:25.738439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.209 [2024-07-24 20:24:25.738462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.209 [2024-07-24 20:24:25.738481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.209 [2024-07-24 20:24:25.738529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.209 qpair failed and we were unable to recover it. 00:30:22.209 [2024-07-24 20:24:25.748248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.209 [2024-07-24 20:24:25.748389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.209 [2024-07-24 20:24:25.748422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.748452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.748471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.748511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 
00:30:22.210 [2024-07-24 20:24:25.758375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.758567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.758602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.758622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.758640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.758680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 00:30:22.210 [2024-07-24 20:24:25.768296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.768445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.768480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.768499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.768518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.768557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 00:30:22.210 [2024-07-24 20:24:25.778325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.778503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.778537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.778557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.778575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.778615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 
00:30:22.210 [2024-07-24 20:24:25.788416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.788592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.788634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.788655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.788672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.788712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 00:30:22.210 [2024-07-24 20:24:25.798461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.798596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.798630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.798650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.798668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.798709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 00:30:22.210 [2024-07-24 20:24:25.808415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.808557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.808590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.808609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.808627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.808667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 
00:30:22.210 [2024-07-24 20:24:25.818446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.818591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.818625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.818644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.818662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.818701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 00:30:22.210 [2024-07-24 20:24:25.828486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.828631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.828664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.828684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.828709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.828749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 00:30:22.210 [2024-07-24 20:24:25.838529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.838664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.838698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.838717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.838735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.838776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 
00:30:22.210 [2024-07-24 20:24:25.848602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.848746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.848780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.848799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.848817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.848856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 00:30:22.210 [2024-07-24 20:24:25.858667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.858817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.858851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.858870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.858889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.858928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 00:30:22.210 [2024-07-24 20:24:25.868585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.868734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.868767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.868787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.210 [2024-07-24 20:24:25.868805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.210 [2024-07-24 20:24:25.868844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.210 qpair failed and we were unable to recover it. 
00:30:22.210 [2024-07-24 20:24:25.878661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.210 [2024-07-24 20:24:25.878849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.210 [2024-07-24 20:24:25.878882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.210 [2024-07-24 20:24:25.878902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.878920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.878959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 00:30:22.211 [2024-07-24 20:24:25.888727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.888901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.888936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.888955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.888973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.889014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 00:30:22.211 [2024-07-24 20:24:25.898703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.898846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.898880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.898901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.898918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.898958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 
00:30:22.211 [2024-07-24 20:24:25.908741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.908884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.908916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.908936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.908954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.908993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 00:30:22.211 [2024-07-24 20:24:25.918732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.918889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.918923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.918944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.918969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.919009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 00:30:22.211 [2024-07-24 20:24:25.928792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.928945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.928979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.928999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.929017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.929056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 
00:30:22.211 [2024-07-24 20:24:25.938806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.938953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.938987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.939006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.939024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.939064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 00:30:22.211 [2024-07-24 20:24:25.948858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.949042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.949076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.949096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.949114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.949153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 00:30:22.211 [2024-07-24 20:24:25.958919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.959095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.959129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.959149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.959167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.959208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 
00:30:22.211 [2024-07-24 20:24:25.968895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.969027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.969061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.969081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.969099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.969140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 00:30:22.211 [2024-07-24 20:24:25.978955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.979097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.979130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.979150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.979168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.979207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 00:30:22.211 [2024-07-24 20:24:25.988955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.211 [2024-07-24 20:24:25.989103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.211 [2024-07-24 20:24:25.989138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.211 [2024-07-24 20:24:25.989158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.211 [2024-07-24 20:24:25.989176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.211 [2024-07-24 20:24:25.989215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.211 qpair failed and we were unable to recover it. 
00:30:22.471 [2024-07-24 20:24:25.999109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.471 [2024-07-24 20:24:25.999257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.471 [2024-07-24 20:24:25.999291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.471 [2024-07-24 20:24:25.999311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.472 [2024-07-24 20:24:25.999328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.472 [2024-07-24 20:24:25.999367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.472 qpair failed and we were unable to recover it. 00:30:22.472 [2024-07-24 20:24:26.009015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.472 [2024-07-24 20:24:26.009153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.472 [2024-07-24 20:24:26.009197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.472 [2024-07-24 20:24:26.009225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.472 [2024-07-24 20:24:26.009245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.472 [2024-07-24 20:24:26.009286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.472 qpair failed and we were unable to recover it. 00:30:22.472 [2024-07-24 20:24:26.019117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.472 [2024-07-24 20:24:26.019265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.472 [2024-07-24 20:24:26.019298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.472 [2024-07-24 20:24:26.019318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.472 [2024-07-24 20:24:26.019336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.472 [2024-07-24 20:24:26.019376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.472 qpair failed and we were unable to recover it. 
00:30:22.472 [2024-07-24 20:24:26.029085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.472 [2024-07-24 20:24:26.029230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.472 [2024-07-24 20:24:26.029265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.472 [2024-07-24 20:24:26.029286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.472 [2024-07-24 20:24:26.029305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.472 [2024-07-24 20:24:26.029344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.472 qpair failed and we were unable to recover it. 00:30:22.472 [2024-07-24 20:24:26.039127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.472 [2024-07-24 20:24:26.039270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.472 [2024-07-24 20:24:26.039304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.472 [2024-07-24 20:24:26.039323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.472 [2024-07-24 20:24:26.039341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.472 [2024-07-24 20:24:26.039381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.472 qpair failed and we were unable to recover it. 00:30:22.472 [2024-07-24 20:24:26.049227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.472 [2024-07-24 20:24:26.049405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.472 [2024-07-24 20:24:26.049449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.472 [2024-07-24 20:24:26.049471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.472 [2024-07-24 20:24:26.049490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:22.472 [2024-07-24 20:24:26.049531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.472 qpair failed and we were unable to recover it. 
00:30:22.472 [2024-07-24 20:24:26.059219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.472 [2024-07-24 20:24:26.059367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.472 [2024-07-24 20:24:26.059401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.472 [2024-07-24 20:24:26.059421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.472 [2024-07-24 20:24:26.059450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.472 [2024-07-24 20:24:26.059490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.472 qpair failed and we were unable to recover it.
00:30:22.472 [2024-07-24 20:24:26.069198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.472 [2024-07-24 20:24:26.069343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.472 [2024-07-24 20:24:26.069377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.472 [2024-07-24 20:24:26.069397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.472 [2024-07-24 20:24:26.069415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.472 [2024-07-24 20:24:26.069466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.472 qpair failed and we were unable to recover it.
00:30:22.472 [2024-07-24 20:24:26.079252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.472 [2024-07-24 20:24:26.079401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.472 [2024-07-24 20:24:26.079444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.472 [2024-07-24 20:24:26.079466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.472 [2024-07-24 20:24:26.079484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.472 [2024-07-24 20:24:26.079524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.472 qpair failed and we were unable to recover it.
00:30:22.472 [2024-07-24 20:24:26.089243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.472 [2024-07-24 20:24:26.089436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.472 [2024-07-24 20:24:26.089471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.472 [2024-07-24 20:24:26.089490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.472 [2024-07-24 20:24:26.089509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.472 [2024-07-24 20:24:26.089550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.472 qpair failed and we were unable to recover it.
00:30:22.472 [2024-07-24 20:24:26.099319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.472 [2024-07-24 20:24:26.099468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.472 [2024-07-24 20:24:26.099509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.472 [2024-07-24 20:24:26.099530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.472 [2024-07-24 20:24:26.099548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.472 [2024-07-24 20:24:26.099587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.472 qpair failed and we were unable to recover it.
00:30:22.472 [2024-07-24 20:24:26.109281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.472 [2024-07-24 20:24:26.109438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.472 [2024-07-24 20:24:26.109473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.472 [2024-07-24 20:24:26.109492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.472 [2024-07-24 20:24:26.109509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.472 [2024-07-24 20:24:26.109549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.472 qpair failed and we were unable to recover it.
00:30:22.472 [2024-07-24 20:24:26.119373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.472 [2024-07-24 20:24:26.119551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.472 [2024-07-24 20:24:26.119585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.472 [2024-07-24 20:24:26.119605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.472 [2024-07-24 20:24:26.119624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.472 [2024-07-24 20:24:26.119664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.472 qpair failed and we were unable to recover it.
00:30:22.472 [2024-07-24 20:24:26.129423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.472 [2024-07-24 20:24:26.129567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.129601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.129621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.129639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.129679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.139421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.139584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.139618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.139637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.139655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.139703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.149400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.149553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.149588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.149607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.149625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.149664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.159485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.159640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.159673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.159693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.159711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.159751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.169483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.169626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.169659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.169679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.169697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.169736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.179545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.179700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.179734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.179755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.179773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.179814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.189543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.189686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.189728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.189749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.189767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.189806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.199594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.199753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.199787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.199808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.199826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.199867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.209617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.209778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.209812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.209831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.209849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.209889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.219692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.219866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.219900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.219919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.219937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.219978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.229743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.473 [2024-07-24 20:24:26.229888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.473 [2024-07-24 20:24:26.229922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.473 [2024-07-24 20:24:26.229942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.473 [2024-07-24 20:24:26.229960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.473 [2024-07-24 20:24:26.230007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.473 qpair failed and we were unable to recover it.
00:30:22.473 [2024-07-24 20:24:26.239792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.474 [2024-07-24 20:24:26.239927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.474 [2024-07-24 20:24:26.239961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.474 [2024-07-24 20:24:26.239980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.474 [2024-07-24 20:24:26.239998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.474 [2024-07-24 20:24:26.240039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.474 qpair failed and we were unable to recover it.
00:30:22.474 [2024-07-24 20:24:26.249720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.474 [2024-07-24 20:24:26.249870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.474 [2024-07-24 20:24:26.249904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.474 [2024-07-24 20:24:26.249923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.474 [2024-07-24 20:24:26.249940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.474 [2024-07-24 20:24:26.249979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.474 qpair failed and we were unable to recover it.
00:30:22.733 [2024-07-24 20:24:26.259793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.733 [2024-07-24 20:24:26.259975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.733 [2024-07-24 20:24:26.260009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.733 [2024-07-24 20:24:26.260029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.733 [2024-07-24 20:24:26.260047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.733 [2024-07-24 20:24:26.260086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.733 qpair failed and we were unable to recover it.
00:30:22.733 [2024-07-24 20:24:26.269819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.733 [2024-07-24 20:24:26.269998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.733 [2024-07-24 20:24:26.270032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.733 [2024-07-24 20:24:26.270051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.733 [2024-07-24 20:24:26.270070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.733 [2024-07-24 20:24:26.270109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.733 qpair failed and we were unable to recover it.
00:30:22.733 [2024-07-24 20:24:26.279837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.733 [2024-07-24 20:24:26.279987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.733 [2024-07-24 20:24:26.280019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.733 [2024-07-24 20:24:26.280038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.733 [2024-07-24 20:24:26.280055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.733 [2024-07-24 20:24:26.280094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.733 qpair failed and we were unable to recover it.
00:30:22.733 [2024-07-24 20:24:26.289861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.733 [2024-07-24 20:24:26.289997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.733 [2024-07-24 20:24:26.290032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.733 [2024-07-24 20:24:26.290052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.733 [2024-07-24 20:24:26.290070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.733 [2024-07-24 20:24:26.290109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.733 qpair failed and we were unable to recover it.
00:30:22.733 [2024-07-24 20:24:26.299952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.733 [2024-07-24 20:24:26.300112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.733 [2024-07-24 20:24:26.300145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.734 [2024-07-24 20:24:26.300165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.734 [2024-07-24 20:24:26.300183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.734 [2024-07-24 20:24:26.300222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.734 qpair failed and we were unable to recover it.
00:30:22.734 [2024-07-24 20:24:26.309889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.734 [2024-07-24 20:24:26.310029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.734 [2024-07-24 20:24:26.310063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.734 [2024-07-24 20:24:26.310083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.734 [2024-07-24 20:24:26.310100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.734 [2024-07-24 20:24:26.310140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.734 qpair failed and we were unable to recover it.
00:30:22.734 [2024-07-24 20:24:26.319984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.734 [2024-07-24 20:24:26.320124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.734 [2024-07-24 20:24:26.320158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.734 [2024-07-24 20:24:26.320178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.734 [2024-07-24 20:24:26.320204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.734 [2024-07-24 20:24:26.320244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.734 qpair failed and we were unable to recover it.
00:30:22.734 [2024-07-24 20:24:26.329981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.734 [2024-07-24 20:24:26.330122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.734 [2024-07-24 20:24:26.330157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.734 [2024-07-24 20:24:26.330176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.734 [2024-07-24 20:24:26.330194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.734 [2024-07-24 20:24:26.330233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.734 qpair failed and we were unable to recover it.
00:30:22.734 [2024-07-24 20:24:26.340003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.734 [2024-07-24 20:24:26.340146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.734 [2024-07-24 20:24:26.340180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.734 [2024-07-24 20:24:26.340199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.734 [2024-07-24 20:24:26.340216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.734 [2024-07-24 20:24:26.340257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.734 qpair failed and we were unable to recover it.
00:30:22.734 [2024-07-24 20:24:26.350072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.734 [2024-07-24 20:24:26.350256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.734 [2024-07-24 20:24:26.350290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.734 [2024-07-24 20:24:26.350309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.734 [2024-07-24 20:24:26.350328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.734 [2024-07-24 20:24:26.350367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.734 qpair failed and we were unable to recover it.
00:30:22.734 [2024-07-24 20:24:26.360072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.734 [2024-07-24 20:24:26.360215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.734 [2024-07-24 20:24:26.360249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.734 [2024-07-24 20:24:26.360269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.734 [2024-07-24 20:24:26.360288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.734 [2024-07-24 20:24:26.360329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.734 qpair failed and we were unable to recover it.
00:30:22.734 [2024-07-24 20:24:26.370060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.734 [2024-07-24 20:24:26.370199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.735 [2024-07-24 20:24:26.370234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.735 [2024-07-24 20:24:26.370253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.735 [2024-07-24 20:24:26.370272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.735 [2024-07-24 20:24:26.370312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.735 qpair failed and we were unable to recover it.
00:30:22.735 [2024-07-24 20:24:26.380169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.735 [2024-07-24 20:24:26.380357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.735 [2024-07-24 20:24:26.380391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.735 [2024-07-24 20:24:26.380411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.735 [2024-07-24 20:24:26.380439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.735 [2024-07-24 20:24:26.380482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.735 qpair failed and we were unable to recover it.
00:30:22.735 [2024-07-24 20:24:26.390175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.735 [2024-07-24 20:24:26.390334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.735 [2024-07-24 20:24:26.390369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.735 [2024-07-24 20:24:26.390389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.735 [2024-07-24 20:24:26.390407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.735 [2024-07-24 20:24:26.390455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.735 qpair failed and we were unable to recover it.
00:30:22.735 [2024-07-24 20:24:26.400167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.735 [2024-07-24 20:24:26.400303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.735 [2024-07-24 20:24:26.400338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.735 [2024-07-24 20:24:26.400357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.735 [2024-07-24 20:24:26.400376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.735 [2024-07-24 20:24:26.400415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.735 qpair failed and we were unable to recover it.
00:30:22.735 [2024-07-24 20:24:26.410317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.735 [2024-07-24 20:24:26.410486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.735 [2024-07-24 20:24:26.410521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.735 [2024-07-24 20:24:26.410549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.735 [2024-07-24 20:24:26.410568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.735 [2024-07-24 20:24:26.410608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.735 qpair failed and we were unable to recover it.
00:30:22.735 [2024-07-24 20:24:26.420282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.735 [2024-07-24 20:24:26.420455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.735 [2024-07-24 20:24:26.420489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.735 [2024-07-24 20:24:26.420508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.735 [2024-07-24 20:24:26.420525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.735 [2024-07-24 20:24:26.420566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.735 qpair failed and we were unable to recover it.
00:30:22.735 [2024-07-24 20:24:26.430346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.735 [2024-07-24 20:24:26.430492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.736 [2024-07-24 20:24:26.430526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.736 [2024-07-24 20:24:26.430546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.736 [2024-07-24 20:24:26.430565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.736 [2024-07-24 20:24:26.430603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.736 qpair failed and we were unable to recover it.
00:30:22.736 [2024-07-24 20:24:26.440296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.736 [2024-07-24 20:24:26.440441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.736 [2024-07-24 20:24:26.440476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.736 [2024-07-24 20:24:26.440496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.736 [2024-07-24 20:24:26.440513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.736 [2024-07-24 20:24:26.440554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.736 qpair failed and we were unable to recover it.
00:30:22.736 [2024-07-24 20:24:26.450398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.736 [2024-07-24 20:24:26.450550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.736 [2024-07-24 20:24:26.450585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.736 [2024-07-24 20:24:26.450605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.736 [2024-07-24 20:24:26.450623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.736 [2024-07-24 20:24:26.450663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.736 qpair failed and we were unable to recover it.
00:30:22.736 [2024-07-24 20:24:26.460363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.736 [2024-07-24 20:24:26.460572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.736 [2024-07-24 20:24:26.460606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.736 [2024-07-24 20:24:26.460626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.736 [2024-07-24 20:24:26.460645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.736 [2024-07-24 20:24:26.460685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.736 qpair failed and we were unable to recover it.
00:30:22.736 [2024-07-24 20:24:26.470383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.736 [2024-07-24 20:24:26.470543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.736 [2024-07-24 20:24:26.470578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.736 [2024-07-24 20:24:26.470598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.736 [2024-07-24 20:24:26.470616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.736 [2024-07-24 20:24:26.470655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.736 qpair failed and we were unable to recover it.
00:30:22.736 [2024-07-24 20:24:26.480495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.736 [2024-07-24 20:24:26.480661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.736 [2024-07-24 20:24:26.480695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.736 [2024-07-24 20:24:26.480716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.736 [2024-07-24 20:24:26.480735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.736 [2024-07-24 20:24:26.480775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.736 qpair failed and we were unable to recover it.
00:30:22.736 [2024-07-24 20:24:26.490457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.736 [2024-07-24 20:24:26.490595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.736 [2024-07-24 20:24:26.490629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.736 [2024-07-24 20:24:26.490649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.736 [2024-07-24 20:24:26.490667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.736 [2024-07-24 20:24:26.490707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.736 qpair failed and we were unable to recover it.
00:30:22.736 [2024-07-24 20:24:26.500532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.736 [2024-07-24 20:24:26.500685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.736 [2024-07-24 20:24:26.500720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.736 [2024-07-24 20:24:26.500747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.736 [2024-07-24 20:24:26.500766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.736 [2024-07-24 20:24:26.500806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.736 qpair failed and we were unable to recover it.
00:30:22.736 [2024-07-24 20:24:26.510535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.737 [2024-07-24 20:24:26.510673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.737 [2024-07-24 20:24:26.510707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.737 [2024-07-24 20:24:26.510727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.737 [2024-07-24 20:24:26.510745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.737 [2024-07-24 20:24:26.510784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.737 qpair failed and we were unable to recover it.
00:30:22.996 [2024-07-24 20:24:26.520591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.996 [2024-07-24 20:24:26.520746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.996 [2024-07-24 20:24:26.520779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.996 [2024-07-24 20:24:26.520799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.996 [2024-07-24 20:24:26.520816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.996 [2024-07-24 20:24:26.520856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.996 qpair failed and we were unable to recover it.
00:30:22.996 [2024-07-24 20:24:26.530579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.996 [2024-07-24 20:24:26.530723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.996 [2024-07-24 20:24:26.530758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.996 [2024-07-24 20:24:26.530777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.996 [2024-07-24 20:24:26.530795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.996 [2024-07-24 20:24:26.530835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.996 qpair failed and we were unable to recover it.
00:30:22.996 [2024-07-24 20:24:26.540677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.996 [2024-07-24 20:24:26.540827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.996 [2024-07-24 20:24:26.540859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.996 [2024-07-24 20:24:26.540879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.996 [2024-07-24 20:24:26.540897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.996 [2024-07-24 20:24:26.540938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.996 qpair failed and we were unable to recover it.
00:30:22.996 [2024-07-24 20:24:26.550640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.996 [2024-07-24 20:24:26.550775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.996 [2024-07-24 20:24:26.550820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.996 [2024-07-24 20:24:26.550840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.996 [2024-07-24 20:24:26.550858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.996 [2024-07-24 20:24:26.550897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.996 qpair failed and we were unable to recover it.
00:30:22.996 [2024-07-24 20:24:26.560723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.996 [2024-07-24 20:24:26.560861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.996 [2024-07-24 20:24:26.560895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.996 [2024-07-24 20:24:26.560915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.996 [2024-07-24 20:24:26.560934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.996 [2024-07-24 20:24:26.560974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.996 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.570709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.570842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.570877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.570897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.570915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.570956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.580766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.580912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.580946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.580966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.580983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.581022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.590745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.590884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.590924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.590946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.590963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.591003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.600784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.600919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.600952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.600972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.600990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.601029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.610826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.610968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.611002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.611021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.611039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.611079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.620872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.621026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.621060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.621079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.621097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.621138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.630911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.631057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.631091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.631111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.631129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.631176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.640968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.641138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.641171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.641191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.641209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.641249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.650972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.651115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.651149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.651169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.651187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.651226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.661007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.661161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.661195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.661215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.661233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.661272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.671042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.671191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.671225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.671245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.671262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.671302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.681062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.681201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.681254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.681285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.681305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.997 [2024-07-24 20:24:26.681345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.997 qpair failed and we were unable to recover it.
00:30:22.997 [2024-07-24 20:24:26.691083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.997 [2024-07-24 20:24:26.691222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.997 [2024-07-24 20:24:26.691256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.997 [2024-07-24 20:24:26.691276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.997 [2024-07-24 20:24:26.691294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.691333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:22.998 [2024-07-24 20:24:26.701154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.998 [2024-07-24 20:24:26.701301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.998 [2024-07-24 20:24:26.701336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.998 [2024-07-24 20:24:26.701356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.998 [2024-07-24 20:24:26.701373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.701412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:22.998 [2024-07-24 20:24:26.711181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.998 [2024-07-24 20:24:26.711321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.998 [2024-07-24 20:24:26.711355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.998 [2024-07-24 20:24:26.711374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.998 [2024-07-24 20:24:26.711392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.711441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:22.998 [2024-07-24 20:24:26.721154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.998 [2024-07-24 20:24:26.721302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.998 [2024-07-24 20:24:26.721337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.998 [2024-07-24 20:24:26.721357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.998 [2024-07-24 20:24:26.721382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.721422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:22.998 [2024-07-24 20:24:26.731174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.998 [2024-07-24 20:24:26.731337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.998 [2024-07-24 20:24:26.731372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.998 [2024-07-24 20:24:26.731391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.998 [2024-07-24 20:24:26.731409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.731458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:22.998 [2024-07-24 20:24:26.741274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.998 [2024-07-24 20:24:26.741463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.998 [2024-07-24 20:24:26.741497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.998 [2024-07-24 20:24:26.741518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.998 [2024-07-24 20:24:26.741535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.741575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:22.998 [2024-07-24 20:24:26.751293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.998 [2024-07-24 20:24:26.751480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.998 [2024-07-24 20:24:26.751515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.998 [2024-07-24 20:24:26.751535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.998 [2024-07-24 20:24:26.751553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.751592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:22.998 [2024-07-24 20:24:26.761328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.998 [2024-07-24 20:24:26.761501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.998 [2024-07-24 20:24:26.761535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.998 [2024-07-24 20:24:26.761555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.998 [2024-07-24 20:24:26.761574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.761613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:22.998 [2024-07-24 20:24:26.771290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.998 [2024-07-24 20:24:26.771444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.998 [2024-07-24 20:24:26.771479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.998 [2024-07-24 20:24:26.771499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.998 [2024-07-24 20:24:26.771517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:22.998 [2024-07-24 20:24:26.771556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:22.998 qpair failed and we were unable to recover it.
00:30:23.257 [2024-07-24 20:24:26.781308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.257 [2024-07-24 20:24:26.781471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.781507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.781527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.781544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.781585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.791368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.791524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.791558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.791579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.791597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.791636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.801390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.801549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.801584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.801604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.801622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.801662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.811400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.811539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.811573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.811600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.811619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.811658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.821465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.821653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.821695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.821715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.821733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.821773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.831478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.831648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.831682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.831701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.831720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.831761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.841492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.841628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.841661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.841681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.841699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.841739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.851505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.851651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.851685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.851705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.851723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.851763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.861614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.861755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.861788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.861808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.861826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.861867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.871600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.871795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.871829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.871848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.871866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.871906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.881723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.881906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.881940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.881960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.881979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.882019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.891638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.891778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.891812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.891833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.891850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.891889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.901705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.901852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.901885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.901914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.258 [2024-07-24 20:24:26.901933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.258 [2024-07-24 20:24:26.901973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.258 qpair failed and we were unable to recover it.
00:30:23.258 [2024-07-24 20:24:26.911763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.258 [2024-07-24 20:24:26.911905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.258 [2024-07-24 20:24:26.911938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.258 [2024-07-24 20:24:26.911958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.911976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.912015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:26.921734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:26.921895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:26.921930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:26.921950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.921968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.922007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:26.931738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:26.931879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:26.931913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:26.931933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.931952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.931990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:26.941808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:26.941999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:26.942033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:26.942054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.942071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.942112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:26.951826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:26.951961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:26.951995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:26.952015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.952033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.952072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:26.961940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:26.962113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:26.962148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:26.962167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.962186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.962225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:26.971948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:26.972087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:26.972121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:26.972141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.972160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.972199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:26.981921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:26.982067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:26.982102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:26.982121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.982140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.982179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:26.991952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:26.992089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:26.992130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:26.992152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:26.992170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:26.992209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:27.001982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:27.002115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:27.002149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:27.002169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:27.002185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:27.002225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:27.011989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:27.012135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:27.012170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:27.012190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:27.012208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:27.012247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:27.022057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:27.022210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:27.022243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:27.022263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:27.022281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:27.022321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.259 [2024-07-24 20:24:27.032035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.259 [2024-07-24 20:24:27.032175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.259 [2024-07-24 20:24:27.032209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.259 [2024-07-24 20:24:27.032229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.259 [2024-07-24 20:24:27.032247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.259 [2024-07-24 20:24:27.032295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.259 qpair failed and we were unable to recover it.
00:30:23.519 [2024-07-24 20:24:27.042148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.519 [2024-07-24 20:24:27.042290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.519 [2024-07-24 20:24:27.042323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.519 [2024-07-24 20:24:27.042343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.519 [2024-07-24 20:24:27.042361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.519 [2024-07-24 20:24:27.042401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.519 qpair failed and we were unable to recover it.
00:30:23.519 [2024-07-24 20:24:27.052220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.519 [2024-07-24 20:24:27.052396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.519 [2024-07-24 20:24:27.052438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.519 [2024-07-24 20:24:27.052462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.519 [2024-07-24 20:24:27.052480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.519 [2024-07-24 20:24:27.052521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.519 qpair failed and we were unable to recover it.
00:30:23.519 [2024-07-24 20:24:27.062240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.519 [2024-07-24 20:24:27.062418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.519 [2024-07-24 20:24:27.062462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.519 [2024-07-24 20:24:27.062483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.519 [2024-07-24 20:24:27.062501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.519 [2024-07-24 20:24:27.062541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.519 qpair failed and we were unable to recover it.
00:30:23.519 [2024-07-24 20:24:27.072198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.519 [2024-07-24 20:24:27.072361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.519 [2024-07-24 20:24:27.072395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.519 [2024-07-24 20:24:27.072415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.519 [2024-07-24 20:24:27.072443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.519 [2024-07-24 20:24:27.072484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.519 qpair failed and we were unable to recover it.
00:30:23.519 [2024-07-24 20:24:27.082204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.519 [2024-07-24 20:24:27.082345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.519 [2024-07-24 20:24:27.082386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.519 [2024-07-24 20:24:27.082407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.519 [2024-07-24 20:24:27.082425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.519 [2024-07-24 20:24:27.082475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.519 qpair failed and we were unable to recover it.
00:30:23.519 [2024-07-24 20:24:27.092265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.519 [2024-07-24 20:24:27.092416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.092463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.092485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.092503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.092542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.102264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.102407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.102450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.102471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.102489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.102531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.112279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.112415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.112458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.112479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.112498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.112538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.122321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.122466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.122500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.122521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.122546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.122587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.132361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.132509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.132543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.132563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.132579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.132620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.142391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.142553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.142587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.142606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.142624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.142664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.152447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.152635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.152669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.152689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.152706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.152747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.162492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.162629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.162663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.162682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.162701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.162741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.172473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.172632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.172665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.172686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.172704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.172742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.182525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.182675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.182715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.182735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.182754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.182794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.192560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.520 [2024-07-24 20:24:27.192701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.520 [2024-07-24 20:24:27.192736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.520 [2024-07-24 20:24:27.192755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.520 [2024-07-24 20:24:27.192773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:23.520 [2024-07-24 20:24:27.192812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.520 qpair failed and we were unable to recover it.
00:30:23.520 [2024-07-24 20:24:27.202549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.520 [2024-07-24 20:24:27.202709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.520 [2024-07-24 20:24:27.202742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.520 [2024-07-24 20:24:27.202762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.520 [2024-07-24 20:24:27.202780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.520 [2024-07-24 20:24:27.202819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.520 qpair failed and we were unable to recover it. 00:30:23.520 [2024-07-24 20:24:27.212593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.520 [2024-07-24 20:24:27.212759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.520 [2024-07-24 20:24:27.212793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.520 [2024-07-24 20:24:27.212813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.520 [2024-07-24 20:24:27.212838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.520 [2024-07-24 20:24:27.212878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.520 qpair failed and we were unable to recover it. 00:30:23.520 [2024-07-24 20:24:27.222645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.520 [2024-07-24 20:24:27.222791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.520 [2024-07-24 20:24:27.222825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.520 [2024-07-24 20:24:27.222845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.222862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.222901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 
00:30:23.521 [2024-07-24 20:24:27.232642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.521 [2024-07-24 20:24:27.232777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.521 [2024-07-24 20:24:27.232812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.521 [2024-07-24 20:24:27.232831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.232849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.232888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 00:30:23.521 [2024-07-24 20:24:27.242691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.521 [2024-07-24 20:24:27.242831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.521 [2024-07-24 20:24:27.242866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.521 [2024-07-24 20:24:27.242885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.242903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.242942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 00:30:23.521 [2024-07-24 20:24:27.252703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.521 [2024-07-24 20:24:27.252837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.521 [2024-07-24 20:24:27.252871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.521 [2024-07-24 20:24:27.252890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.252908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.252947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 
00:30:23.521 [2024-07-24 20:24:27.262774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.521 [2024-07-24 20:24:27.262926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.521 [2024-07-24 20:24:27.262960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.521 [2024-07-24 20:24:27.262980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.262998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.263037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 00:30:23.521 [2024-07-24 20:24:27.272773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.521 [2024-07-24 20:24:27.272920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.521 [2024-07-24 20:24:27.272955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.521 [2024-07-24 20:24:27.272975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.272992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.273032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 00:30:23.521 [2024-07-24 20:24:27.282807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.521 [2024-07-24 20:24:27.282950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.521 [2024-07-24 20:24:27.282984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.521 [2024-07-24 20:24:27.283003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.283020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.283059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 
00:30:23.521 [2024-07-24 20:24:27.292900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.521 [2024-07-24 20:24:27.293038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.521 [2024-07-24 20:24:27.293073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.521 [2024-07-24 20:24:27.293092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.293111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.293150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 00:30:23.521 [2024-07-24 20:24:27.302858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.521 [2024-07-24 20:24:27.303001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.521 [2024-07-24 20:24:27.303035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.521 [2024-07-24 20:24:27.303063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.521 [2024-07-24 20:24:27.303083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.521 [2024-07-24 20:24:27.303125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.521 qpair failed and we were unable to recover it. 00:30:23.780 [2024-07-24 20:24:27.312887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.780 [2024-07-24 20:24:27.313032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.780 [2024-07-24 20:24:27.313065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.780 [2024-07-24 20:24:27.313085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.780 [2024-07-24 20:24:27.313104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.780 [2024-07-24 20:24:27.313143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.780 qpair failed and we were unable to recover it. 
00:30:23.780 [2024-07-24 20:24:27.322931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.780 [2024-07-24 20:24:27.323078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.780 [2024-07-24 20:24:27.323112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.780 [2024-07-24 20:24:27.323132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.780 [2024-07-24 20:24:27.323150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.780 [2024-07-24 20:24:27.323189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.780 qpair failed and we were unable to recover it. 00:30:23.780 [2024-07-24 20:24:27.332946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.780 [2024-07-24 20:24:27.333089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.780 [2024-07-24 20:24:27.333122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.780 [2024-07-24 20:24:27.333142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.780 [2024-07-24 20:24:27.333160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.333202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 00:30:23.781 [2024-07-24 20:24:27.343023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.343167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.343200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.343220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.343238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.343276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 
00:30:23.781 [2024-07-24 20:24:27.353008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.353204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.353238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.353258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.353276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.353315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 00:30:23.781 [2024-07-24 20:24:27.363036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.363181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.363216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.363235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.363254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.363293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 00:30:23.781 [2024-07-24 20:24:27.373052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.373187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.373221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.373241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.373259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.373298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 
00:30:23.781 [2024-07-24 20:24:27.383125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.383271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.383306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.383326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.383344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.383384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 00:30:23.781 [2024-07-24 20:24:27.393163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.393306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.393346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.393367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.393385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.393424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 00:30:23.781 [2024-07-24 20:24:27.403168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.403320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.403353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.403373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.403391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.403440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 
00:30:23.781 [2024-07-24 20:24:27.413166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.413304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.413338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.413358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.413376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.413415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 00:30:23.781 [2024-07-24 20:24:27.423208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.423404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.423447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.423469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.423487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.423527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 00:30:23.781 [2024-07-24 20:24:27.433235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.433411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.433455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.433476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.433494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.433542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 
00:30:23.781 [2024-07-24 20:24:27.443269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.443406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.443448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.443469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.443488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.781 [2024-07-24 20:24:27.443528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.781 qpair failed and we were unable to recover it. 00:30:23.781 [2024-07-24 20:24:27.453301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.781 [2024-07-24 20:24:27.453461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.781 [2024-07-24 20:24:27.453494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.781 [2024-07-24 20:24:27.453515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.781 [2024-07-24 20:24:27.453533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.453575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 00:30:23.782 [2024-07-24 20:24:27.463366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.463525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.463558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.463577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.463596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.463637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 
00:30:23.782 [2024-07-24 20:24:27.473339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.473482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.473518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.473538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.473556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.473596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 00:30:23.782 [2024-07-24 20:24:27.483386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.483579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.483621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.483642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.483660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.483700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 00:30:23.782 [2024-07-24 20:24:27.493395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.493547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.493581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.493601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.493620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.493660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 
00:30:23.782 [2024-07-24 20:24:27.503464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.503632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.503666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.503686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.503704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.503744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 00:30:23.782 [2024-07-24 20:24:27.513467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.513609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.513643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.513663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.513681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.513720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 00:30:23.782 [2024-07-24 20:24:27.523495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.523627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.523660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.523680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.523706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.523746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 
00:30:23.782 [2024-07-24 20:24:27.533599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.533778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.533812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.533832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.533850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.533890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 00:30:23.782 [2024-07-24 20:24:27.543604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.543748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.543781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.543800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.543818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.543857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 00:30:23.782 [2024-07-24 20:24:27.553583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.553721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.553755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.553775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.553793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.553832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 
00:30:23.782 [2024-07-24 20:24:27.563676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.782 [2024-07-24 20:24:27.563822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.782 [2024-07-24 20:24:27.563856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.782 [2024-07-24 20:24:27.563876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.782 [2024-07-24 20:24:27.563894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:23.782 [2024-07-24 20:24:27.563936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:23.782 qpair failed and we were unable to recover it. 00:30:24.042 [2024-07-24 20:24:27.573666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.573821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.573858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.573878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.573896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.573935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 00:30:24.042 [2024-07-24 20:24:27.583696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.583840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.583874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.583893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.583911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.583950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 
00:30:24.042 [2024-07-24 20:24:27.593728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.593876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.593909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.593929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.593948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.593987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 00:30:24.042 [2024-07-24 20:24:27.603763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.603901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.603936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.603956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.603974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.604016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 00:30:24.042 [2024-07-24 20:24:27.613757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.613899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.613934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.613953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.613979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.614019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 
00:30:24.042 [2024-07-24 20:24:27.623799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.623942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.623976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.623996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.624015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.624054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 00:30:24.042 [2024-07-24 20:24:27.633851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.633994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.634028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.634048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.634067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.634107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 00:30:24.042 [2024-07-24 20:24:27.643846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.643981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.644015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.644036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.644055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.644094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 
00:30:24.042 [2024-07-24 20:24:27.653864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.654008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.654042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.654063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.654081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.654120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.042 qpair failed and we were unable to recover it. 00:30:24.042 [2024-07-24 20:24:27.663941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.042 [2024-07-24 20:24:27.664095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.042 [2024-07-24 20:24:27.664129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.042 [2024-07-24 20:24:27.664149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.042 [2024-07-24 20:24:27.664166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.042 [2024-07-24 20:24:27.664206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 00:30:24.043 [2024-07-24 20:24:27.673947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.674129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.674171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.674191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.674209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.674248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 
00:30:24.043 [2024-07-24 20:24:27.683986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.684156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.684191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.684212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.684230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.684269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 00:30:24.043 [2024-07-24 20:24:27.694004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.694149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.694183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.694203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.694222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.694261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 00:30:24.043 [2024-07-24 20:24:27.704039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.704178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.704212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.704241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.704260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.704301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 
00:30:24.043 [2024-07-24 20:24:27.714058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.714197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.714232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.714252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.714271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.714311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 00:30:24.043 [2024-07-24 20:24:27.724081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.724221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.724257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.724277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.724295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.724335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 00:30:24.043 [2024-07-24 20:24:27.734127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.734265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.734299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.734320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.734338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.734377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 
00:30:24.043 [2024-07-24 20:24:27.744163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.744358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.744392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.744411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.744437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.744479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 00:30:24.043 [2024-07-24 20:24:27.754193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.754334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.754372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.754391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.754409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.754458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 00:30:24.043 [2024-07-24 20:24:27.764274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.043 [2024-07-24 20:24:27.764451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.043 [2024-07-24 20:24:27.764485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.043 [2024-07-24 20:24:27.764505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.043 [2024-07-24 20:24:27.764523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.043 [2024-07-24 20:24:27.764563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.043 qpair failed and we were unable to recover it. 
00:30:24.043 [2024-07-24 20:24:27.774241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.043 [2024-07-24 20:24:27.774386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.043 [2024-07-24 20:24:27.774420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.043 [2024-07-24 20:24:27.774456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.043 [2024-07-24 20:24:27.774475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.043 [2024-07-24 20:24:27.774515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.043 qpair failed and we were unable to recover it.
00:30:24.043 [2024-07-24 20:24:27.784286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.043 [2024-07-24 20:24:27.784486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.043 [2024-07-24 20:24:27.784520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.043 [2024-07-24 20:24:27.784539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.044 [2024-07-24 20:24:27.784558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.044 [2024-07-24 20:24:27.784600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.044 qpair failed and we were unable to recover it.
00:30:24.044 [2024-07-24 20:24:27.794363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.044 [2024-07-24 20:24:27.794497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.044 [2024-07-24 20:24:27.794538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.044 [2024-07-24 20:24:27.794559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.044 [2024-07-24 20:24:27.794578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.044 [2024-07-24 20:24:27.794617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.044 qpair failed and we were unable to recover it.
00:30:24.044 [2024-07-24 20:24:27.804309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.044 [2024-07-24 20:24:27.804458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.044 [2024-07-24 20:24:27.804492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.044 [2024-07-24 20:24:27.804512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.044 [2024-07-24 20:24:27.804529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.044 [2024-07-24 20:24:27.804570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.044 qpair failed and we were unable to recover it.
00:30:24.044 [2024-07-24 20:24:27.814405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.044 [2024-07-24 20:24:27.814558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.044 [2024-07-24 20:24:27.814593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.044 [2024-07-24 20:24:27.814613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.044 [2024-07-24 20:24:27.814630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.044 [2024-07-24 20:24:27.814670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.044 qpair failed and we were unable to recover it.
00:30:24.044 [2024-07-24 20:24:27.824388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.044 [2024-07-24 20:24:27.824544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.044 [2024-07-24 20:24:27.824578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.044 [2024-07-24 20:24:27.824598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.044 [2024-07-24 20:24:27.824616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.044 [2024-07-24 20:24:27.824655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.044 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.834460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.834613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.834648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.834667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.834686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.834736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.844439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.844598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.844632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.844652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.844670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.844709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.854568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.854749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.854783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.854803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.854820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.854862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.864512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.864658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.864691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.864711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.864729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.864769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.874542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.874691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.874725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.874745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.874764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.874803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.884589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.884754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.884795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.884816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.884834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.884873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.894705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.894878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.894912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.894932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.894950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.894991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.904661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.904809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.904843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.904863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.904881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.904920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.914676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.914815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.914849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.914868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.914886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.914926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.924701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.304 [2024-07-24 20:24:27.924885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.304 [2024-07-24 20:24:27.924919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.304 [2024-07-24 20:24:27.924939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.304 [2024-07-24 20:24:27.924956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.304 [2024-07-24 20:24:27.925005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.304 qpair failed and we were unable to recover it.
00:30:24.304 [2024-07-24 20:24:27.934709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:27.934851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:27.934884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:27.934904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:27.934922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:27.934964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:27.944805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:27.944965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:27.944999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:27.945018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:27.945036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:27.945075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:27.954840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:27.954975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:27.955009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:27.955028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:27.955047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:27.955086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:27.964825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:27.964982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:27.965017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:27.965037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:27.965055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:27.965094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:27.974883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:27.975067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:27.975102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:27.975121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:27.975138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:27.975179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:27.984935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:27.985078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:27.985112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:27.985132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:27.985150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:27.985189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:27.994904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:27.995049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:27.995083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:27.995103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:27.995121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:27.995160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:28.004985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:28.005158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:28.005193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:28.005212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:28.005229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:28.005267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:28.014954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:28.015104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:28.015138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:28.015157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:28.015182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:28.015223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:28.025010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:28.025160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:28.025194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:28.025214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:28.025232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:28.025271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:28.035021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:28.035176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:28.035210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:28.035229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:28.035248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:28.035287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:28.045018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:28.045168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:28.045203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.305 [2024-07-24 20:24:28.045222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.305 [2024-07-24 20:24:28.045240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.305 [2024-07-24 20:24:28.045280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.305 qpair failed and we were unable to recover it.
00:30:24.305 [2024-07-24 20:24:28.055081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.305 [2024-07-24 20:24:28.055217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.305 [2024-07-24 20:24:28.055251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.306 [2024-07-24 20:24:28.055272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.306 [2024-07-24 20:24:28.055291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.306 [2024-07-24 20:24:28.055330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.306 qpair failed and we were unable to recover it.
00:30:24.306 [2024-07-24 20:24:28.065100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.306 [2024-07-24 20:24:28.065251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.306 [2024-07-24 20:24:28.065285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.306 [2024-07-24 20:24:28.065304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.306 [2024-07-24 20:24:28.065323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.306 [2024-07-24 20:24:28.065362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.306 qpair failed and we were unable to recover it.
00:30:24.306 [2024-07-24 20:24:28.075188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.306 [2024-07-24 20:24:28.075332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.306 [2024-07-24 20:24:28.075367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.306 [2024-07-24 20:24:28.075387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.306 [2024-07-24 20:24:28.075405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.306 [2024-07-24 20:24:28.075453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.306 qpair failed and we were unable to recover it.
00:30:24.306 [2024-07-24 20:24:28.085227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.306 [2024-07-24 20:24:28.085389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.306 [2024-07-24 20:24:28.085424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.306 [2024-07-24 20:24:28.085455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.306 [2024-07-24 20:24:28.085474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.306 [2024-07-24 20:24:28.085514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.306 qpair failed and we were unable to recover it.
00:30:24.565 [2024-07-24 20:24:28.095220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.565 [2024-07-24 20:24:28.095362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.565 [2024-07-24 20:24:28.095396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.565 [2024-07-24 20:24:28.095416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.565 [2024-07-24 20:24:28.095445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.565 [2024-07-24 20:24:28.095486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-07-24 20:24:28.105303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.565 [2024-07-24 20:24:28.105470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.565 [2024-07-24 20:24:28.105504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.565 [2024-07-24 20:24:28.105535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.565 [2024-07-24 20:24:28.105554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.565 [2024-07-24 20:24:28.105594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-07-24 20:24:28.115272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.565 [2024-07-24 20:24:28.115408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.565 [2024-07-24 20:24:28.115450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.565 [2024-07-24 20:24:28.115472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.565 [2024-07-24 20:24:28.115490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.565 [2024-07-24 20:24:28.115530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-07-24 20:24:28.125321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.565 [2024-07-24 20:24:28.125467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.565 [2024-07-24 20:24:28.125501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.565 [2024-07-24 20:24:28.125521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.125540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.125580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.135389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.135542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.135576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.135596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.135614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.135655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.145354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.145504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.145538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.145558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.145576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.145615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.155462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.155602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.155635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.155655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.155673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.155714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.165404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.165550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.165584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.165604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.165622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.165661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.175450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.175645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.175679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.175698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.175716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.175757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.185536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.185697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.185732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.185752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.185770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.185809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.195564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.195730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.195764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.195791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.195811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.195851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.205552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.205738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.205772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.205792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.205810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.205849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.215563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.215708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.215743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.215763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.215780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.215819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.225607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.225756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.225790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.225810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.225827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.225865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.235670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.235814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.235847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.235867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.235885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.235924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.245719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.245857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.245892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.245912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.566 [2024-07-24 20:24:28.245929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.566 [2024-07-24 20:24:28.245968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-07-24 20:24:28.255740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.566 [2024-07-24 20:24:28.255875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.566 [2024-07-24 20:24:28.255909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.566 [2024-07-24 20:24:28.255930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.255948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.255987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.265761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.567 [2024-07-24 20:24:28.265904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.567 [2024-07-24 20:24:28.265937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.567 [2024-07-24 20:24:28.265957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.265975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.266014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.275765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.567 [2024-07-24 20:24:28.275913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.567 [2024-07-24 20:24:28.275947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.567 [2024-07-24 20:24:28.275967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.275985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.276025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.285795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.567 [2024-07-24 20:24:28.285961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.567 [2024-07-24 20:24:28.286001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.567 [2024-07-24 20:24:28.286022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.286038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.286077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.295803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.567 [2024-07-24 20:24:28.295945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.567 [2024-07-24 20:24:28.295979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.567 [2024-07-24 20:24:28.295998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.296017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.296057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.305915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.567 [2024-07-24 20:24:28.306057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.567 [2024-07-24 20:24:28.306090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.567 [2024-07-24 20:24:28.306110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.306128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.306166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.315855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.567 [2024-07-24 20:24:28.316006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.567 [2024-07-24 20:24:28.316040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.567 [2024-07-24 20:24:28.316059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.316077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.316116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.325903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.567 [2024-07-24 20:24:28.326048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.567 [2024-07-24 20:24:28.326082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.567 [2024-07-24 20:24:28.326102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.326120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.326166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.336003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.567 [2024-07-24 20:24:28.336141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.567 [2024-07-24 20:24:28.336175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.567 [2024-07-24 20:24:28.336195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.567 [2024-07-24 20:24:28.336212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90
00:30:24.567 [2024-07-24 20:24:28.336252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:24.567 qpair failed and we were unable to recover it.
00:30:24.567 [2024-07-24 20:24:28.346029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.567 [2024-07-24 20:24:28.346202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.567 [2024-07-24 20:24:28.346236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.567 [2024-07-24 20:24:28.346255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.567 [2024-07-24 20:24:28.346274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.567 [2024-07-24 20:24:28.346313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.827 [2024-07-24 20:24:28.356010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.356151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.356185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.356205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.356223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.356263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 00:30:24.827 [2024-07-24 20:24:28.366058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.366193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.366226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.366246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.366264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.366304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 
00:30:24.827 [2024-07-24 20:24:28.376050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.376195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.376237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.376258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.376276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.376316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 00:30:24.827 [2024-07-24 20:24:28.386133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.386318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.386353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.386374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.386391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.386438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 00:30:24.827 [2024-07-24 20:24:28.396130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.396279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.396314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.396333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.396352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.396392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 
00:30:24.827 [2024-07-24 20:24:28.406232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.406374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.406408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.406435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.406457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.406496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 00:30:24.827 [2024-07-24 20:24:28.416218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.416355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.416389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.416409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.416445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.416487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 00:30:24.827 [2024-07-24 20:24:28.426289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.426470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.426504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.426524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.426542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.426582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 
00:30:24.827 [2024-07-24 20:24:28.436242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.436407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.436451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.436473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.436491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.436532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 00:30:24.827 [2024-07-24 20:24:28.446278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.446477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.446512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.446532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.446551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.446592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 00:30:24.827 [2024-07-24 20:24:28.456355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.456489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.456523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.456542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.456562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.827 [2024-07-24 20:24:28.456602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.827 qpair failed and we were unable to recover it. 
00:30:24.827 [2024-07-24 20:24:28.466335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.827 [2024-07-24 20:24:28.466498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.827 [2024-07-24 20:24:28.466533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.827 [2024-07-24 20:24:28.466553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.827 [2024-07-24 20:24:28.466571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.466611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.476463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.476604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.476638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.476658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.476676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.476716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.486455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.486588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.486622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.486645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.486662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.486701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 
00:30:24.828 [2024-07-24 20:24:28.496425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.496577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.496611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.496631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.496649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.496688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.506478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.506628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.506662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.506690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.506709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.506751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.516570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.516751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.516785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.516805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.516823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.516863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 
00:30:24.828 [2024-07-24 20:24:28.526504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.526651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.526685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.526705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.526723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.526763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.536489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.536625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.536659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.536679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.536697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.536736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.546627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.546769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.546803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.546822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.546841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.546881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 
00:30:24.828 [2024-07-24 20:24:28.556578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.556732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.556766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.556786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.556803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.556844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.566600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.566779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.566814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.566834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.566852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.566892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.576704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.576841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.576876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.576896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.576914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.576953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 
00:30:24.828 [2024-07-24 20:24:28.586712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.586873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.828 [2024-07-24 20:24:28.586908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.828 [2024-07-24 20:24:28.586928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.828 [2024-07-24 20:24:28.586947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.828 [2024-07-24 20:24:28.586986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.828 qpair failed and we were unable to recover it. 00:30:24.828 [2024-07-24 20:24:28.596704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.828 [2024-07-24 20:24:28.596844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.829 [2024-07-24 20:24:28.596879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.829 [2024-07-24 20:24:28.596906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.829 [2024-07-24 20:24:28.596926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.829 [2024-07-24 20:24:28.596966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.829 qpair failed and we were unable to recover it. 00:30:24.829 [2024-07-24 20:24:28.606815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.829 [2024-07-24 20:24:28.606960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.829 [2024-07-24 20:24:28.606994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.829 [2024-07-24 20:24:28.607013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.829 [2024-07-24 20:24:28.607032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:24.829 [2024-07-24 20:24:28.607071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.829 qpair failed and we were unable to recover it. 
00:30:25.088 [2024-07-24 20:24:28.616742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.088 [2024-07-24 20:24:28.616876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.088 [2024-07-24 20:24:28.616909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.088 [2024-07-24 20:24:28.616929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.088 [2024-07-24 20:24:28.616947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.088 [2024-07-24 20:24:28.616986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.088 qpair failed and we were unable to recover it. 00:30:25.088 [2024-07-24 20:24:28.626897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.088 [2024-07-24 20:24:28.627041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.088 [2024-07-24 20:24:28.627075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.088 [2024-07-24 20:24:28.627095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.088 [2024-07-24 20:24:28.627114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.088 [2024-07-24 20:24:28.627153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.088 qpair failed and we were unable to recover it. 00:30:25.088 [2024-07-24 20:24:28.636850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.088 [2024-07-24 20:24:28.636998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.088 [2024-07-24 20:24:28.637032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.088 [2024-07-24 20:24:28.637052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.088 [2024-07-24 20:24:28.637070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.088 [2024-07-24 20:24:28.637110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.088 qpair failed and we were unable to recover it. 
00:30:25.088 [2024-07-24 20:24:28.646867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.088 [2024-07-24 20:24:28.647010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.088 [2024-07-24 20:24:28.647043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.088 [2024-07-24 20:24:28.647063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.088 [2024-07-24 20:24:28.647082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.088 [2024-07-24 20:24:28.647120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.088 qpair failed and we were unable to recover it. 00:30:25.088 [2024-07-24 20:24:28.656896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.088 [2024-07-24 20:24:28.657042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.088 [2024-07-24 20:24:28.657076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.088 [2024-07-24 20:24:28.657096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.088 [2024-07-24 20:24:28.657114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.088 [2024-07-24 20:24:28.657158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.088 qpair failed and we were unable to recover it. 00:30:25.088 [2024-07-24 20:24:28.666956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.088 [2024-07-24 20:24:28.667102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.088 [2024-07-24 20:24:28.667135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.088 [2024-07-24 20:24:28.667155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.088 [2024-07-24 20:24:28.667173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.088 [2024-07-24 20:24:28.667212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.088 qpair failed and we were unable to recover it. 
00:30:25.088 [2024-07-24 20:24:28.676977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.088 [2024-07-24 20:24:28.677126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.088 [2024-07-24 20:24:28.677161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.088 [2024-07-24 20:24:28.677181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.677198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.677238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 00:30:25.089 [2024-07-24 20:24:28.687010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.687153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.687193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.687215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.687233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.687272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 00:30:25.089 [2024-07-24 20:24:28.697024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.697208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.697243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.697262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.697279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.697320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 
00:30:25.089 [2024-07-24 20:24:28.707034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.707215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.707249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.707270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.707288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.707327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 00:30:25.089 [2024-07-24 20:24:28.717081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.717226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.717261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.717281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.717299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.717338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 00:30:25.089 [2024-07-24 20:24:28.727095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.727235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.727269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.727289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.727307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.727355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 
00:30:25.089 [2024-07-24 20:24:28.737133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.737287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.737322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.737342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.737360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.737400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 00:30:25.089 [2024-07-24 20:24:28.747227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.747378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.747413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.747440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.747461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.747501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 00:30:25.089 [2024-07-24 20:24:28.757186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.757332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.757366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.757386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.089 [2024-07-24 20:24:28.757405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.089 [2024-07-24 20:24:28.757452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.089 qpair failed and we were unable to recover it. 
00:30:25.089 [2024-07-24 20:24:28.767237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.089 [2024-07-24 20:24:28.767380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.089 [2024-07-24 20:24:28.767413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.089 [2024-07-24 20:24:28.767456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.767478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.767517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 00:30:25.090 [2024-07-24 20:24:28.777255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.777412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.777463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.090 [2024-07-24 20:24:28.777485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.777504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.777544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 00:30:25.090 [2024-07-24 20:24:28.787284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.787426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.787472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.090 [2024-07-24 20:24:28.787492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.787510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.787552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 
00:30:25.090 [2024-07-24 20:24:28.797343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.797489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.797524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.090 [2024-07-24 20:24:28.797543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.797562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.797602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 00:30:25.090 [2024-07-24 20:24:28.807315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.807462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.807496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.090 [2024-07-24 20:24:28.807516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.807535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.807575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 00:30:25.090 [2024-07-24 20:24:28.817347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.817500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.817534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.090 [2024-07-24 20:24:28.817554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.817582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.817624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 
00:30:25.090 [2024-07-24 20:24:28.827401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.827554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.827588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.090 [2024-07-24 20:24:28.827607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.827625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.827664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 00:30:25.090 [2024-07-24 20:24:28.837450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.837598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.837632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.090 [2024-07-24 20:24:28.837652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.837669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.837708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 00:30:25.090 [2024-07-24 20:24:28.847580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.847746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.847779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.090 [2024-07-24 20:24:28.847799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.090 [2024-07-24 20:24:28.847817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.090 [2024-07-24 20:24:28.847857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.090 qpair failed and we were unable to recover it. 
00:30:25.090 [2024-07-24 20:24:28.857499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.090 [2024-07-24 20:24:28.857642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.090 [2024-07-24 20:24:28.857676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.091 [2024-07-24 20:24:28.857696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.091 [2024-07-24 20:24:28.857715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.091 [2024-07-24 20:24:28.857755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.091 qpair failed and we were unable to recover it. 00:30:25.091 [2024-07-24 20:24:28.867541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.091 [2024-07-24 20:24:28.867691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.091 [2024-07-24 20:24:28.867726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.091 [2024-07-24 20:24:28.867746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.091 [2024-07-24 20:24:28.867764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.091 [2024-07-24 20:24:28.867805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.091 qpair failed and we were unable to recover it. 00:30:25.350 [2024-07-24 20:24:28.877532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.350 [2024-07-24 20:24:28.877669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.350 [2024-07-24 20:24:28.877703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.350 [2024-07-24 20:24:28.877723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.350 [2024-07-24 20:24:28.877740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.350 [2024-07-24 20:24:28.877781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.350 qpair failed and we were unable to recover it. 
[66 further near-identical connect retries elided: the same error sequence repeats at ctrlr.c: 761 (Unknown controller ID 0x1), nvme_fabric.c: 600/611 (Connect command failed, rc -5; sct 1, sc 130), nvme_tcp.c:2435/2225 (Failed to connect tqpair=0x7fe954000b90), and nvme_qpair.c: 804 (CQ transport error -6 on qpair id 2), with timestamps advancing from 2024-07-24 20:24:28.887 to 20:24:29.539; every attempt ends with "qpair failed and we were unable to recover it."]
00:30:25.873 [2024-07-24 20:24:29.549603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.873 [2024-07-24 20:24:29.549793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.873 [2024-07-24 20:24:29.549827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.873 [2024-07-24 20:24:29.549847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.873 [2024-07-24 20:24:29.549865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.873 [2024-07-24 20:24:29.549905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.873 qpair failed and we were unable to recover it. 00:30:25.873 [2024-07-24 20:24:29.559645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.873 [2024-07-24 20:24:29.559785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.873 [2024-07-24 20:24:29.559819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.873 [2024-07-24 20:24:29.559839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.873 [2024-07-24 20:24:29.559857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.873 [2024-07-24 20:24:29.559897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.873 qpair failed and we were unable to recover it. 00:30:25.873 [2024-07-24 20:24:29.569616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.873 [2024-07-24 20:24:29.569749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.873 [2024-07-24 20:24:29.569783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.873 [2024-07-24 20:24:29.569802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.569820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.569859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 
00:30:25.874 [2024-07-24 20:24:29.579635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.874 [2024-07-24 20:24:29.579795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.874 [2024-07-24 20:24:29.579839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.874 [2024-07-24 20:24:29.579859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.579878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.579917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 00:30:25.874 [2024-07-24 20:24:29.589697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.874 [2024-07-24 20:24:29.589837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.874 [2024-07-24 20:24:29.589871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.874 [2024-07-24 20:24:29.589891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.589909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.589949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 00:30:25.874 [2024-07-24 20:24:29.599705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.874 [2024-07-24 20:24:29.599845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.874 [2024-07-24 20:24:29.599879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.874 [2024-07-24 20:24:29.599899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.599917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.599956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 
00:30:25.874 [2024-07-24 20:24:29.609808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.874 [2024-07-24 20:24:29.609994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.874 [2024-07-24 20:24:29.610027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.874 [2024-07-24 20:24:29.610047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.610065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.610105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 00:30:25.874 [2024-07-24 20:24:29.619760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.874 [2024-07-24 20:24:29.619898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.874 [2024-07-24 20:24:29.619932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.874 [2024-07-24 20:24:29.619952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.619970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.620017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 00:30:25.874 [2024-07-24 20:24:29.629828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.874 [2024-07-24 20:24:29.629977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.874 [2024-07-24 20:24:29.630011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.874 [2024-07-24 20:24:29.630030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.630048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.630087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 
00:30:25.874 [2024-07-24 20:24:29.639817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.874 [2024-07-24 20:24:29.639973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.874 [2024-07-24 20:24:29.640007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.874 [2024-07-24 20:24:29.640027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.640045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.640084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 00:30:25.874 [2024-07-24 20:24:29.649859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.874 [2024-07-24 20:24:29.649991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.874 [2024-07-24 20:24:29.650025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.874 [2024-07-24 20:24:29.650045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.874 [2024-07-24 20:24:29.650062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:25.874 [2024-07-24 20:24:29.650101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:25.874 qpair failed and we were unable to recover it. 00:30:26.133 [2024-07-24 20:24:29.659863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.133 [2024-07-24 20:24:29.660005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.133 [2024-07-24 20:24:29.660038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.133 [2024-07-24 20:24:29.660058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.133 [2024-07-24 20:24:29.660075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.133 [2024-07-24 20:24:29.660116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.133 qpair failed and we were unable to recover it. 
00:30:26.133 [2024-07-24 20:24:29.669941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.133 [2024-07-24 20:24:29.670110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.133 [2024-07-24 20:24:29.670144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.133 [2024-07-24 20:24:29.670164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.133 [2024-07-24 20:24:29.670182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.133 [2024-07-24 20:24:29.670221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.133 qpair failed and we were unable to recover it. 00:30:26.133 [2024-07-24 20:24:29.679940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.133 [2024-07-24 20:24:29.680077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.133 [2024-07-24 20:24:29.680111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.133 [2024-07-24 20:24:29.680131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.133 [2024-07-24 20:24:29.680149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.133 [2024-07-24 20:24:29.680191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.133 qpair failed and we were unable to recover it. 00:30:26.133 [2024-07-24 20:24:29.689991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.133 [2024-07-24 20:24:29.690138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.133 [2024-07-24 20:24:29.690172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.133 [2024-07-24 20:24:29.690192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.133 [2024-07-24 20:24:29.690210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.133 [2024-07-24 20:24:29.690249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.133 qpair failed and we were unable to recover it. 
00:30:26.133 [2024-07-24 20:24:29.699982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.133 [2024-07-24 20:24:29.700169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.133 [2024-07-24 20:24:29.700203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.133 [2024-07-24 20:24:29.700223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.133 [2024-07-24 20:24:29.700241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.133 [2024-07-24 20:24:29.700281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.710060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.710209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.710242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.710262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.710287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.710328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.720059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.720204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.720238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.720257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.720276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.720316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 
00:30:26.134 [2024-07-24 20:24:29.730111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.730264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.730298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.730317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.730335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.730374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.740124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.740278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.740312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.740332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.740350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.740390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.750153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.750294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.750328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.750348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.750367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.750405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 
00:30:26.134 [2024-07-24 20:24:29.760205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.760345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.760380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.760400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.760418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.760470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.770186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.770319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.770353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.770373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.770391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.770438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.780207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.780378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.780412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.780442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.780463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.780503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 
00:30:26.134 [2024-07-24 20:24:29.790259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.790422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.790464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.790485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.790504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.790543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.800267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.800406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.800449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.800477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.800496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.800536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.810307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.810449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.810483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.810503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.810522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.810561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 
00:30:26.134 [2024-07-24 20:24:29.820330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.820473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.820507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.820527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.820544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.820584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.134 qpair failed and we were unable to recover it. 00:30:26.134 [2024-07-24 20:24:29.830423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.134 [2024-07-24 20:24:29.830581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.134 [2024-07-24 20:24:29.830614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.134 [2024-07-24 20:24:29.830634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.134 [2024-07-24 20:24:29.830652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.134 [2024-07-24 20:24:29.830693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 00:30:26.135 [2024-07-24 20:24:29.840390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.135 [2024-07-24 20:24:29.840555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.135 [2024-07-24 20:24:29.840590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.135 [2024-07-24 20:24:29.840610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.135 [2024-07-24 20:24:29.840628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.135 [2024-07-24 20:24:29.840668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 
00:30:26.135 [2024-07-24 20:24:29.850399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.135 [2024-07-24 20:24:29.850555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.135 [2024-07-24 20:24:29.850589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.135 [2024-07-24 20:24:29.850609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.135 [2024-07-24 20:24:29.850628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.135 [2024-07-24 20:24:29.850667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 00:30:26.135 [2024-07-24 20:24:29.860463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.135 [2024-07-24 20:24:29.860624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.135 [2024-07-24 20:24:29.860657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.135 [2024-07-24 20:24:29.860677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.135 [2024-07-24 20:24:29.860695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.135 [2024-07-24 20:24:29.860735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 00:30:26.135 [2024-07-24 20:24:29.870512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.135 [2024-07-24 20:24:29.870660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.135 [2024-07-24 20:24:29.870694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.135 [2024-07-24 20:24:29.870713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.135 [2024-07-24 20:24:29.870732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.135 [2024-07-24 20:24:29.870771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 
00:30:26.135 [2024-07-24 20:24:29.880517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.135 [2024-07-24 20:24:29.880667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.135 [2024-07-24 20:24:29.880701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.135 [2024-07-24 20:24:29.880720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.135 [2024-07-24 20:24:29.880739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.135 [2024-07-24 20:24:29.880779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 00:30:26.135 [2024-07-24 20:24:29.890543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.135 [2024-07-24 20:24:29.890688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.135 [2024-07-24 20:24:29.890722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.135 [2024-07-24 20:24:29.890748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.135 [2024-07-24 20:24:29.890767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.135 [2024-07-24 20:24:29.890807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 00:30:26.135 [2024-07-24 20:24:29.900566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.135 [2024-07-24 20:24:29.900739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.135 [2024-07-24 20:24:29.900773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.135 [2024-07-24 20:24:29.900793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.135 [2024-07-24 20:24:29.900811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.135 [2024-07-24 20:24:29.900850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 
00:30:26.135 [2024-07-24 20:24:29.910608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.135 [2024-07-24 20:24:29.910757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.135 [2024-07-24 20:24:29.910792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.135 [2024-07-24 20:24:29.910812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.135 [2024-07-24 20:24:29.910830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.135 [2024-07-24 20:24:29.910871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.135 qpair failed and we were unable to recover it. 00:30:26.395 [2024-07-24 20:24:29.920625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:29.920769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.395 [2024-07-24 20:24:29.920803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.395 [2024-07-24 20:24:29.920823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.395 [2024-07-24 20:24:29.920841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.395 [2024-07-24 20:24:29.920880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.395 qpair failed and we were unable to recover it. 00:30:26.395 [2024-07-24 20:24:29.930710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:29.930872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.395 [2024-07-24 20:24:29.930905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.395 [2024-07-24 20:24:29.930926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.395 [2024-07-24 20:24:29.930944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.395 [2024-07-24 20:24:29.930985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.395 qpair failed and we were unable to recover it. 
00:30:26.395 [2024-07-24 20:24:29.940666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:29.940820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.395 [2024-07-24 20:24:29.940853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.395 [2024-07-24 20:24:29.940873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.395 [2024-07-24 20:24:29.940890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.395 [2024-07-24 20:24:29.940929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.395 qpair failed and we were unable to recover it. 00:30:26.395 [2024-07-24 20:24:29.950835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:29.951015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.395 [2024-07-24 20:24:29.951049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.395 [2024-07-24 20:24:29.951068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.395 [2024-07-24 20:24:29.951086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.395 [2024-07-24 20:24:29.951125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.395 qpair failed and we were unable to recover it. 00:30:26.395 [2024-07-24 20:24:29.960789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:29.960932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.395 [2024-07-24 20:24:29.960966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.395 [2024-07-24 20:24:29.960985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.395 [2024-07-24 20:24:29.961003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.395 [2024-07-24 20:24:29.961042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.395 qpair failed and we were unable to recover it. 
00:30:26.395 [2024-07-24 20:24:29.970800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:29.970944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.395 [2024-07-24 20:24:29.970979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.395 [2024-07-24 20:24:29.970999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.395 [2024-07-24 20:24:29.971017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.395 [2024-07-24 20:24:29.971056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.395 qpair failed and we were unable to recover it. 00:30:26.395 [2024-07-24 20:24:29.980810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:29.980942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.395 [2024-07-24 20:24:29.980983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.395 [2024-07-24 20:24:29.981004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.395 [2024-07-24 20:24:29.981022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.395 [2024-07-24 20:24:29.981061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.395 qpair failed and we were unable to recover it. 00:30:26.395 [2024-07-24 20:24:29.990882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:29.991059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.395 [2024-07-24 20:24:29.991094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.395 [2024-07-24 20:24:29.991113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.395 [2024-07-24 20:24:29.991131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.395 [2024-07-24 20:24:29.991171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.395 qpair failed and we were unable to recover it. 
00:30:26.395 [2024-07-24 20:24:30.000871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.395 [2024-07-24 20:24:30.001016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.396 [2024-07-24 20:24:30.001050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.396 [2024-07-24 20:24:30.001070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.396 [2024-07-24 20:24:30.001087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.396 [2024-07-24 20:24:30.001127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.396 qpair failed and we were unable to recover it. 00:30:26.396 [2024-07-24 20:24:30.011002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.396 [2024-07-24 20:24:30.011201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.396 [2024-07-24 20:24:30.011240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.396 [2024-07-24 20:24:30.011262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.396 [2024-07-24 20:24:30.011279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.396 [2024-07-24 20:24:30.011322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.396 qpair failed and we were unable to recover it. 00:30:26.396 [2024-07-24 20:24:30.020952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.396 [2024-07-24 20:24:30.021123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.396 [2024-07-24 20:24:30.021158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.396 [2024-07-24 20:24:30.021178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.396 [2024-07-24 20:24:30.021194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe954000b90 00:30:26.396 [2024-07-24 20:24:30.021242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.396 qpair failed and we were unable to recover it. 
00:30:26.396 [2024-07-24 20:24:30.030978 .. 20:24:30.412] The identical CONNECT failure sequence repeats 39 more times at roughly 10 ms intervals, differing only in timestamps: ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1; nvme_fabric.c: 600/611: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, Connect command completed with error: sct 1, sc 130; nvme_tcp.c:2435/2225: Failed to poll NVMe-oF Fabric CONNECT command, Failed to connect tqpair=0x7fe954000b90; nvme_qpair.c: 804: CQ transport error -6 (No such device or address) on qpair id 2; qpair failed and we were unable to recover it. (repetitions elided)
00:30:26.658 [2024-07-24 20:24:30.422146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.658 [2024-07-24 20:24:30.422315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.658 [2024-07-24 20:24:30.422358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.658 [2024-07-24 20:24:30.422381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.658 [2024-07-24 20:24:30.422398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x578ea0 00:30:26.658 [2024-07-24 20:24:30.422454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.658 qpair failed and we were unable to recover it. 00:30:26.658 [2024-07-24 20:24:30.432135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.658 [2024-07-24 20:24:30.432310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.658 [2024-07-24 20:24:30.432346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.658 [2024-07-24 20:24:30.432366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.658 [2024-07-24 20:24:30.432383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x578ea0 00:30:26.658 [2024-07-24 20:24:30.432422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.658 qpair failed and we were unable to recover it. 00:30:26.916 [2024-07-24 20:24:30.442245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.916 [2024-07-24 20:24:30.442387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.916 [2024-07-24 20:24:30.442457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.916 [2024-07-24 20:24:30.442484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.916 [2024-07-24 20:24:30.442502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe94c000b90 00:30:26.916 [2024-07-24 20:24:30.442546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.916 qpair failed and we were unable to recover it. 
00:30:26.916 [2024-07-24 20:24:30.452344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.916 [2024-07-24 20:24:30.452509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.916 [2024-07-24 20:24:30.452546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.916 [2024-07-24 20:24:30.452567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.916 [2024-07-24 20:24:30.452583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe94c000b90 00:30:26.916 [2024-07-24 20:24:30.452624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.916 qpair failed and we were unable to recover it. 00:30:26.916 [2024-07-24 20:24:30.462257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.916 [2024-07-24 20:24:30.462402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.916 [2024-07-24 20:24:30.462467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.916 [2024-07-24 20:24:30.462491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.917 [2024-07-24 20:24:30.462510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe95c000b90 00:30:26.917 [2024-07-24 20:24:30.462551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.917 qpair failed and we were unable to recover it. 00:30:26.917 [2024-07-24 20:24:30.462767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x575b00 is same with the state(5) to be set 00:30:26.917 [2024-07-24 20:24:30.472304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.917 [2024-07-24 20:24:30.472472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.917 [2024-07-24 20:24:30.472510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.917 [2024-07-24 20:24:30.472530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.917 [2024-07-24 20:24:30.472547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe95c000b90 00:30:26.917 [2024-07-24 20:24:30.472588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.917 qpair failed and we were unable to recover it. 
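For reference, the status pair in the failure blocks above decodes per the NVMe-oF specification: sct 1 is the Command Specific status code type, and sc 130 (0x82) is Connect Invalid Parameters, which lines up with the target-side ctrlr.c rejection ("Unknown controller ID 0x1") while this disconnect test tears controllers down and the initiator keeps retrying. A minimal shell sketch of that decoding; the constant values follow the spec, but the helper itself is illustrative and not part of SPDK:

decode_connect_status() {
    # $1 = sct (status code type), $2 = sc (status code), as printed by nvme_fabric.c
    local sct=$1 sc=$2
    if [ "$sct" -eq 1 ] && [ "$sc" -eq 130 ]; then
        # 130 = 0x82, Connect Invalid Parameters in the Fabrics CONNECT command-specific set
        echo 'CONNECT rejected: Connect Invalid Parameters (e.g. unknown cntlid)'
    else
        printf 'unmapped status: sct=%#x sc=%#x\n' "$sct" "$sc"
    fi
}
decode_connect_status 1 130   # prints the Connect Invalid Parameters line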
00:30:26.917 [2024-07-24 20:24:30.472924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x575b00 (9): Bad file descriptor 00:30:26.917 Initializing NVMe Controllers 00:30:26.917 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:26.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:26.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:26.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:26.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:26.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:26.917 Initialization complete. Launching workers. 00:30:26.917 Starting thread on core 1 00:30:26.917 Starting thread on core 2 00:30:26.917 Starting thread on core 3 00:30:26.917 Starting thread on core 0 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:26.917 00:30:26.917 real 0m11.244s 00:30:26.917 user 0m18.828s 00:30:26.917 sys 0m5.694s 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.917 ************************************ 00:30:26.917 END TEST nvmf_target_disconnect_tc2 00:30:26.917 ************************************ 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:26.917 rmmod nvme_tcp 00:30:26.917 rmmod nvme_fabrics 00:30:26.917 rmmod nvme_keyring 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2173511 ']' 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2173511 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2173511 ']' 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2173511 00:30:26.917 20:24:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2173511 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2173511' 00:30:26.917 killing process with pid 2173511 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2173511 00:30:26.917 20:24:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2173511 00:30:27.483 20:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:27.483 20:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:27.483 20:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:27.483 20:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:27.483 20:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:27.483 20:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.483 20:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.483 20:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.421 20:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:29.421 00:30:29.421 real 0m16.893s 00:30:29.421 user 0m45.752s 00:30:29.421 sys 0m8.249s 00:30:29.421 20:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:29.421 20:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:29.421 ************************************ 00:30:29.421 END TEST nvmf_target_disconnect 00:30:29.421 ************************************ 00:30:29.421 20:24:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:29.421 00:30:29.421 real 6m10.132s 00:30:29.421 user 12m52.541s 00:30:29.421 sys 1m31.768s 00:30:29.421 20:24:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:29.421 20:24:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.421 ************************************ 00:30:29.421 END TEST nvmf_host 00:30:29.421 ************************************ 00:30:29.421 00:30:29.421 real 23m23.686s 00:30:29.421 user 54m47.416s 00:30:29.421 sys 5m55.747s 00:30:29.421 20:24:33 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:29.421 20:24:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.421 ************************************ 00:30:29.421 END TEST nvmf_tcp 00:30:29.421 ************************************ 00:30:29.421 20:24:33 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:30:29.421 20:24:33 -- 
spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:29.421 20:24:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:29.421 20:24:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:29.421 20:24:33 -- common/autotest_common.sh@10 -- # set +x 00:30:29.680 ************************************ 00:30:29.680 START TEST spdkcli_nvmf_tcp 00:30:29.680 ************************************ 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:29.680 * Looking for test storage... 00:30:29.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2174715 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@34 -- # waitforlisten 2174715 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2174715 ']' 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:29.680 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.680 [2024-07-24 20:24:33.395840] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:30:29.680 [2024-07-24 20:24:33.395960] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174715 ] 00:30:29.680 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.939 [2024-07-24 20:24:33.480501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:29.939 [2024-07-24 20:24:33.626813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.939 [2024-07-24 20:24:33.626820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.197 20:24:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:30.197 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:30.197 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:30.197 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:30.197 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:30.197 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:30.197 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:30.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:30.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:30.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces 
create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:30.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:30.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:30.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:30.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:30.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:30.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:30.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:30.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:30.198 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:30.198 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:30.198 ' 00:30:33.481 [2024-07-24 20:24:36.755108] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.415 [2024-07-24 20:24:38.024061] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:36.942 [2024-07-24 20:24:40.367838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:38.839 [2024-07-24 20:24:42.390447] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:40.237 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:40.237 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:40.237 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:40.237 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:40.237 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:40.237 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:40.237 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 
io_unit_size=8192', '', True] 00:30:40.237 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:40.237 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:40.237 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:40.237 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:40.237 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:40.495 20:24:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:40.495 20:24:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:40.495 20:24:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.495 20:24:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:40.495 20:24:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:40.495 20:24:44 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:30:40.495 20:24:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:40.495 20:24:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.062 20:24:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:41.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:41.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:41.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:41.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:41.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:41.062 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:41.062 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:41.062 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:41.062 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:41.062 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:41.062 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:41.062 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:41.062 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:41.062 ' 00:30:46.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:46.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:46.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:46.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:46.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:46.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:46.329 Executing command: ['/nvmf/subsystem delete 
nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:46.329 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:46.329 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:46.329 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:46.329 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:46.329 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:46.329 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:46.329 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2174715 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2174715 ']' 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2174715 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2174715 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2174715' 00:30:46.587 killing process with pid 2174715 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2174715 00:30:46.587 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2174715 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2174715 ']' 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2174715 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2174715 ']' 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2174715 00:30:46.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2174715) - No such process 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2174715 is not found' 00:30:46.846 Process with pid 2174715 is not found 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:46.846 00:30:46.846 real 0m17.390s 00:30:46.846 user 0m37.486s 00:30:46.846 sys 0m1.125s 00:30:46.846 20:24:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:46.846 20:24:50 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@10 -- # set +x 00:30:46.846 ************************************ 00:30:46.846 END TEST spdkcli_nvmf_tcp 00:30:46.846 ************************************ 00:30:47.105 20:24:50 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:47.105 20:24:50 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:47.105 20:24:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:47.105 20:24:50 -- common/autotest_common.sh@10 -- # set +x 00:30:47.105 ************************************ 00:30:47.105 START TEST nvmf_identify_passthru 00:30:47.105 ************************************ 00:30:47.105 20:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:47.105 * Looking for test storage... 00:30:47.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:47.105 20:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.105 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.105 20:24:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.106 20:24:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.106 20:24:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.106 20:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.106 20:24:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.106 20:24:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.106 20:24:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:47.106 20:24:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.106 20:24:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.106 20:24:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:47.106 20:24:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:47.106 20:24:50 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:47.106 20:24:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.639 20:24:53 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:49.639 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:49.640 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:49.640 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:49.640 Found net devices under 0000:84:00.0: cvl_0_0 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:49.640 Found net devices under 0000:84:00.1: cvl_0_1 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
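Note: the nvmf_tcp_init sequence traced below moves one port of the two-port e810 NIC into a private network namespace so the NVMe/TCP target (10.0.0.2) and the initiator (10.0.0.1) can exchange traffic over real hardware on a single host. Stripped of the xtrace prefixes, the wiring reduces to roughly the following sketch; the cvl_0_0/cvl_0_1 device names, the 10.0.0.0/24 addressing, and port 4420 are simply the values this particular run uses:

    ip netns add cvl_0_0_ns_spdk                                  # namespace that will hold the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # port 1 stays in the root namespace for the initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port in the host firewall for this interface
    ping -c 1 10.0.0.2                                            # sanity check from the root namespace to the target address

Because the target-side port lives inside the namespace, the nvmf_tgt application later in this test is launched through "ip netns exec cvl_0_0_ns_spdk", which is why its EAL and RPC output appears under that namespace.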
00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.640 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.898 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.898 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.898 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:49.898 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:49.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:30:49.899 00:30:49.899 --- 10.0.0.2 ping statistics --- 00:30:49.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.899 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:30:49.899 00:30:49.899 --- 10.0.0.1 ping statistics --- 00:30:49.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.899 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:49.899 20:24:53 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:49.899 20:24:53 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.899 20:24:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:30:49.899 20:24:53 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:30:49.899 20:24:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:30:49.899 20:24:53 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:30:49.899 20:24:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:30:49.899 20:24:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:49.899 20:24:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:50.157 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.373 
20:24:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:30:54.373 20:24:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:30:54.373 20:24:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:54.373 20:24:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:54.373 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.565 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:58.565 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:58.565 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.565 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.824 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:58.824 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:58.824 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.824 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2179472 00:30:58.824 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:58.824 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:58.824 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2179472 00:30:58.824 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2179472 ']' 00:30:58.824 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.824 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:58.824 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.824 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:58.824 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.824 [2024-07-24 20:25:02.463845] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:30:58.824 [2024-07-24 20:25:02.463987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.824 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.824 [2024-07-24 20:25:02.570139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:59.082 [2024-07-24 20:25:02.710647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.082 [2024-07-24 20:25:02.710724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:59.082 [2024-07-24 20:25:02.710744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.082 [2024-07-24 20:25:02.710762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.082 [2024-07-24 20:25:02.710776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.083 [2024-07-24 20:25:02.714458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.083 [2024-07-24 20:25:02.714500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.083 [2024-07-24 20:25:02.714534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:59.083 [2024-07-24 20:25:02.714539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.083 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:59.083 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:30:59.083 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:59.083 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.083 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.083 INFO: Log level set to 20 00:30:59.083 INFO: Requests: 00:30:59.083 { 00:30:59.083 "jsonrpc": "2.0", 00:30:59.083 "method": "nvmf_set_config", 00:30:59.083 "id": 1, 00:30:59.083 "params": { 00:30:59.083 "admin_cmd_passthru": { 00:30:59.083 "identify_ctrlr": true 00:30:59.083 } 00:30:59.083 } 00:30:59.083 } 00:30:59.083 00:30:59.083 INFO: response: 00:30:59.083 { 00:30:59.083 "jsonrpc": "2.0", 00:30:59.083 "id": 1, 00:30:59.083 "result": true 00:30:59.083 } 00:30:59.083 00:30:59.083 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.083 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:59.083 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.083 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.083 INFO: Setting log level to 20 00:30:59.083 INFO: Setting log level to 20 00:30:59.083 INFO: Log level set to 20 00:30:59.083 INFO: Log level set to 20 00:30:59.083 INFO: Requests: 00:30:59.083 { 00:30:59.083 "jsonrpc": "2.0", 00:30:59.083 "method": "framework_start_init", 00:30:59.083 "id": 1 00:30:59.083 } 00:30:59.083 00:30:59.083 INFO: Requests: 00:30:59.083 { 00:30:59.083 "jsonrpc": "2.0", 00:30:59.083 "method": "framework_start_init", 00:30:59.083 "id": 1 00:30:59.083 } 00:30:59.083 00:30:59.341 [2024-07-24 20:25:02.918024] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:59.341 INFO: response: 00:30:59.341 { 00:30:59.341 "jsonrpc": "2.0", 00:30:59.341 "id": 1, 00:30:59.341 "result": true 00:30:59.341 } 00:30:59.341 00:30:59.341 INFO: response: 00:30:59.341 { 00:30:59.341 "jsonrpc": "2.0", 00:30:59.341 "id": 1, 00:30:59.341 "result": true 00:30:59.341 } 00:30:59.341 00:30:59.341 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.341 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.341 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.341 20:25:02 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:59.341 INFO: Setting log level to 40 00:30:59.341 INFO: Setting log level to 40 00:30:59.341 INFO: Setting log level to 40 00:30:59.341 [2024-07-24 20:25:02.928689] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.341 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.341 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:59.341 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:59.341 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.341 20:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:30:59.341 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.341 20:25:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.624 Nvme0n1 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.624 20:25:05 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.624 20:25:05 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.624 20:25:05 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.624 [2024-07-24 20:25:05.846010] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.624 20:25:05 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.624 [ 00:31:02.624 { 00:31:02.624 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:02.624 "subtype": "Discovery", 00:31:02.624 "listen_addresses": [], 00:31:02.624 "allow_any_host": true, 00:31:02.624 "hosts": [] 00:31:02.624 }, 00:31:02.624 { 00:31:02.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.624 "subtype": "NVMe", 00:31:02.624 "listen_addresses": [ 00:31:02.624 { 00:31:02.624 "trtype": "TCP", 00:31:02.624 "adrfam": "IPv4", 00:31:02.624 "traddr": "10.0.0.2", 00:31:02.624 "trsvcid": "4420" 00:31:02.624 } 00:31:02.624 ], 00:31:02.624 "allow_any_host": true, 00:31:02.624 "hosts": [], 00:31:02.624 "serial_number": 
"SPDK00000000000001", 00:31:02.624 "model_number": "SPDK bdev Controller", 00:31:02.624 "max_namespaces": 1, 00:31:02.624 "min_cntlid": 1, 00:31:02.624 "max_cntlid": 65519, 00:31:02.624 "namespaces": [ 00:31:02.624 { 00:31:02.624 "nsid": 1, 00:31:02.624 "bdev_name": "Nvme0n1", 00:31:02.624 "name": "Nvme0n1", 00:31:02.624 "nguid": "63BDC488A1E4420CBE6688A3A38C362C", 00:31:02.624 "uuid": "63bdc488-a1e4-420c-be66-88a3a38c362c" 00:31:02.624 } 00:31:02.624 ] 00:31:02.624 } 00:31:02.624 ] 00:31:02.624 20:25:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.624 20:25:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:02.624 20:25:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:02.624 20:25:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:02.624 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:02.624 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:02.624 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.624 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.624 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:02.624 20:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:02.624 rmmod nvme_tcp 00:31:02.624 rmmod nvme_fabrics 00:31:02.624 rmmod nvme_keyring 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:02.624 20:25:06 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2179472 ']' 00:31:02.624 20:25:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2179472 00:31:02.624 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2179472 ']' 00:31:02.624 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2179472 00:31:02.625 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:31:02.625 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:02.625 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2179472 00:31:02.883 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:02.883 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:02.883 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2179472' 00:31:02.883 killing process with pid 2179472 00:31:02.883 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2179472 00:31:02.883 20:25:06 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2179472 00:31:04.787 20:25:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:04.787 20:25:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:04.787 20:25:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:04.787 20:25:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.787 20:25:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.787 20:25:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.787 20:25:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.787 20:25:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.695 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.695 00:31:06.695 real 0m19.500s 00:31:06.695 user 0m28.237s 00:31:06.695 sys 0m3.143s 00:31:06.695 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:06.695 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:06.695 ************************************ 00:31:06.695 END TEST nvmf_identify_passthru 00:31:06.695 ************************************ 00:31:06.695 20:25:10 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:06.695 20:25:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:06.695 20:25:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:06.695 20:25:10 -- common/autotest_common.sh@10 -- # set +x 00:31:06.695 ************************************ 00:31:06.695 START TEST nvmf_dif 00:31:06.695 ************************************ 00:31:06.695 20:25:10 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:06.695 * Looking for test storage... 
00:31:06.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.695 20:25:10 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.695 20:25:10 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.695 20:25:10 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.695 20:25:10 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.695 20:25:10 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.696 20:25:10 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.696 20:25:10 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.696 20:25:10 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.696 20:25:10 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:06.696 20:25:10 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:06.696 20:25:10 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:06.696 20:25:10 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:06.696 20:25:10 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:06.696 20:25:10 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:06.696 20:25:10 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.696 20:25:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:06.696 20:25:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:06.696 20:25:10 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:06.696 20:25:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:09.234 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.234 20:25:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:09.235 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
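[Annotation] The device scan above matches vendor:device pairs (0x8086:0x159b is an Intel E810 port bound to the ice driver) and then maps each PCI function to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A rough standalone equivalent of that lookup; the real gather_supported_nvmf_pci_devs uses prebuilt pci_bus_cache arrays rather than lspci, so this is an approximation:

#!/usr/bin/env bash
# Sketch: locate E810 ports (8086:159b) and their netdev names via sysfs,
# mirroring the "Found net devices under <pci>: <dev>" lines in the trace.
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")
    for net in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $net ]] || continue
        dev=${net##*/}                            # e.g. cvl_0_0
        state=$(cat "$net/operstate")             # the [[ up == up ]] check above
        echo "Found net devices under $pci: $dev (driver=$driver, $state)"
    done
done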
00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:09.235 Found net devices under 0000:84:00.0: cvl_0_0 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:09.235 Found net devices under 0000:84:00.1: cvl_0_1 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.235 20:25:12 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:09.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:09.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:31:09.235 00:31:09.235 --- 10.0.0.2 ping statistics --- 00:31:09.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.235 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:09.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:31:09.235 00:31:09.235 --- 10.0.0.1 ping statistics --- 00:31:09.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.235 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:09.235 20:25:12 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:10.611 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:10.611 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:10.611 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:10.611 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:10.611 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:10.611 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:10.611 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:10.611 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:10.611 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:10.611 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:10.611 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:10.611 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:10.611 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:10.611 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:10.611 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:10.611 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:10.872 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:10.872 20:25:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:10.872 20:25:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:10.872 20:25:14 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:10.872 20:25:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2182761 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:10.872 20:25:14 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2182761 00:31:10.872 20:25:14 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2182761 ']' 00:31:10.872 20:25:14 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.872 20:25:14 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:10.872 20:25:14 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.872 20:25:14 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:10.872 20:25:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:11.133 [2024-07-24 20:25:14.697489] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:31:11.133 [2024-07-24 20:25:14.697669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.133 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.133 [2024-07-24 20:25:14.837896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.393 [2024-07-24 20:25:15.035800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.393 [2024-07-24 20:25:15.035909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.393 [2024-07-24 20:25:15.035946] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.393 [2024-07-24 20:25:15.035977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.393 [2024-07-24 20:25:15.036017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
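[Annotation] nvmf_tcp_init, traced above, splits the two E810 ports across a network namespace so initiator and target traffic traverses a real link: the target port moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator keeps 10.0.0.1/24 on the host side, and nvmf_tgt is then launched under ip netns exec and polled until its RPC socket answers. Condensed from the traced commands (interface and namespace names as logged; the wait loop is an approximation of the real waitforlisten helper):

# Condensed from the nvmf_tcp_init / nvmfappstart trace above.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                  # target port into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # sanity-check both directions,
ip netns exec "$NS" ping -c 1 10.0.0.1             # as the trace does

# Launch the target inside the namespace; its /var/tmp/spdk.sock is a
# filesystem socket, so rpc.py can reach it from outside the netns.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
until ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do sleep 0.5; done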
00:31:11.393 [2024-07-24 20:25:15.036082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.329 20:25:15 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:31:12.330 20:25:15 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:12.330 20:25:15 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.330 20:25:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:12.330 20:25:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:12.330 [2024-07-24 20:25:15.825479] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.330 20:25:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.330 20:25:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:12.330 ************************************ 00:31:12.330 START TEST fio_dif_1_default 00:31:12.330 ************************************ 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:12.330 bdev_null0 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:12.330 [2024-07-24 20:25:15.894444] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:12.330 { 00:31:12.330 "params": { 00:31:12.330 "name": "Nvme$subsystem", 00:31:12.330 "trtype": "$TEST_TRANSPORT", 00:31:12.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.330 "adrfam": "ipv4", 00:31:12.330 "trsvcid": "$NVMF_PORT", 00:31:12.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.330 "hdgst": ${hdgst:-false}, 00:31:12.330 "ddgst": ${ddgst:-false} 00:31:12.330 }, 00:31:12.330 "method": "bdev_nvme_attach_controller" 00:31:12.330 } 00:31:12.330 EOF 00:31:12.330 )") 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:12.330 "params": { 00:31:12.330 "name": "Nvme0", 00:31:12.330 "trtype": "tcp", 00:31:12.330 "traddr": "10.0.0.2", 00:31:12.330 "adrfam": "ipv4", 00:31:12.330 "trsvcid": "4420", 00:31:12.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.330 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.330 "hdgst": false, 00:31:12.330 "ddgst": false 00:31:12.330 }, 00:31:12.330 "method": "bdev_nvme_attach_controller" 00:31:12.330 }' 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:12.330 20:25:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.589 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:12.589 fio-3.35 00:31:12.589 Starting 1 thread 00:31:12.589 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.788 00:31:24.788 filename0: (groupid=0, jobs=1): err= 0: pid=2183120: Wed Jul 24 20:25:27 2024 00:31:24.788 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10017msec) 00:31:24.788 slat (nsec): min=5979, max=63536, avg=12278.43, stdev=3493.13 00:31:24.788 clat (usec): min=40880, max=43035, avg=41522.16, stdev=501.16 00:31:24.788 lat (usec): min=40890, max=43053, avg=41534.44, stdev=501.35 00:31:24.788 clat percentiles (usec): 00:31:24.788 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:24.788 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:31:24.788 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:24.788 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:24.788 | 99.99th=[43254] 00:31:24.788 bw ( KiB/s): min= 384, max= 384, per=99.75%, avg=384.00, stdev= 0.00, samples=20 00:31:24.788 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=20 00:31:24.788 
lat (msec) : 50=100.00% 00:31:24.788 cpu : usr=89.12%, sys=10.51%, ctx=13, majf=0, minf=218 00:31:24.788 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.788 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.788 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:24.788 00:31:24.788 Run status group 0 (all jobs): 00:31:24.788 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10017-10017msec 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.788 00:31:24.788 real 0m11.473s 00:31:24.788 user 0m10.391s 00:31:24.788 sys 0m1.499s 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 ************************************ 00:31:24.788 END TEST fio_dif_1_default 00:31:24.788 ************************************ 00:31:24.788 20:25:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:24.788 20:25:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:24.788 20:25:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 ************************************ 00:31:24.788 START TEST fio_dif_1_multi_subsystems 00:31:24.788 ************************************ 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:24.788 20:25:27 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 bdev_null0 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 [2024-07-24 20:25:27.438567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 bdev_null1 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.788 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:24.789 { 00:31:24.789 "params": { 00:31:24.789 "name": "Nvme$subsystem", 00:31:24.789 "trtype": "$TEST_TRANSPORT", 00:31:24.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.789 "adrfam": "ipv4", 00:31:24.789 "trsvcid": "$NVMF_PORT", 00:31:24.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.789 "hdgst": ${hdgst:-false}, 00:31:24.789 "ddgst": ${ddgst:-false} 00:31:24.789 }, 00:31:24.789 "method": "bdev_nvme_attach_controller" 00:31:24.789 } 00:31:24.789 EOF 00:31:24.789 )") 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:24.789 20:25:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:24.789 { 00:31:24.789 "params": { 00:31:24.789 "name": "Nvme$subsystem", 00:31:24.789 "trtype": "$TEST_TRANSPORT", 00:31:24.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.789 "adrfam": "ipv4", 00:31:24.789 "trsvcid": "$NVMF_PORT", 00:31:24.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.789 "hdgst": ${hdgst:-false}, 00:31:24.789 "ddgst": ${ddgst:-false} 00:31:24.789 }, 00:31:24.789 "method": "bdev_nvme_attach_controller" 00:31:24.789 } 00:31:24.789 EOF 00:31:24.789 )") 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
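[Annotation] gen_nvmf_target_json, whose expansion is traced above and whose printed result follows below, emits one bdev_nvme_attach_controller block per subsystem id, joins them with IFS=',' and normalizes the result with jq before it reaches fio on /dev/fd/62. A reduced sketch of the generator; the per-controller block is copied from the trace, while the outer "subsystems"/"bdev" wrapper is an assumption about the full helper in nvmf/common.sh, which also honors hdgst/ddgst overrides:

gen_nvmf_target_json() {
    local subsystem block config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem, as printed in the trace.
        printf -v block '{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false}}' \
            "$subsystem" "$subsystem" "$subsystem"
        config+=("$block")
    done
    local IFS=,
    # jq both validates and pretty-prints; the wrapper shape is assumed here,
    # since fio's spdk_json_conf wants a complete bdev subsystem config.
    jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}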
00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:24.789 "params": { 00:31:24.789 "name": "Nvme0", 00:31:24.789 "trtype": "tcp", 00:31:24.789 "traddr": "10.0.0.2", 00:31:24.789 "adrfam": "ipv4", 00:31:24.789 "trsvcid": "4420", 00:31:24.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.789 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.789 "hdgst": false, 00:31:24.789 "ddgst": false 00:31:24.789 }, 00:31:24.789 "method": "bdev_nvme_attach_controller" 00:31:24.789 },{ 00:31:24.789 "params": { 00:31:24.789 "name": "Nvme1", 00:31:24.789 "trtype": "tcp", 00:31:24.789 "traddr": "10.0.0.2", 00:31:24.789 "adrfam": "ipv4", 00:31:24.789 "trsvcid": "4420", 00:31:24.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:24.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:24.789 "hdgst": false, 00:31:24.789 "ddgst": false 00:31:24.789 }, 00:31:24.789 "method": "bdev_nvme_attach_controller" 00:31:24.789 }' 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:24.789 20:25:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.789 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:24.789 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:24.789 fio-3.35 00:31:24.789 Starting 2 threads 00:31:24.789 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.000 00:31:37.000 filename0: (groupid=0, jobs=1): err= 0: pid=2184537: Wed Jul 24 20:25:38 2024 00:31:37.000 read: IOPS=183, BW=733KiB/s (751kB/s)(7344KiB/10013msec) 00:31:37.000 slat (nsec): min=6182, max=82598, avg=21636.56, stdev=8806.19 00:31:37.000 clat (usec): min=1120, max=45002, avg=21745.72, stdev=20292.55 00:31:37.000 lat (usec): min=1135, max=45020, avg=21767.36, stdev=20292.68 00:31:37.000 clat percentiles (usec): 00:31:37.000 | 1.00th=[ 1172], 5.00th=[ 1287], 10.00th=[ 1319], 20.00th=[ 1369], 00:31:37.000 | 30.00th=[ 1434], 40.00th=[ 1532], 50.00th=[41157], 60.00th=[41681], 00:31:37.000 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:37.000 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:31:37.000 | 99.99th=[44827] 
00:31:37.000 bw ( KiB/s): min= 704, max= 768, per=50.18%, avg=732.80, stdev=32.67, samples=20 00:31:37.000 iops : min= 176, max= 192, avg=183.20, stdev= 8.17, samples=20 00:31:37.000 lat (msec) : 2=49.89%, 50=50.11% 00:31:37.000 cpu : usr=94.59%, sys=4.84%, ctx=18, majf=0, minf=181 00:31:37.000 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.000 issued rwts: total=1836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.000 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:37.000 filename1: (groupid=0, jobs=1): err= 0: pid=2184538: Wed Jul 24 20:25:38 2024 00:31:37.000 read: IOPS=181, BW=725KiB/s (743kB/s)(7264KiB/10015msec) 00:31:37.001 slat (nsec): min=10439, max=60181, avg=21810.09, stdev=8512.04 00:31:37.001 clat (usec): min=884, max=47404, avg=21992.69, stdev=20343.92 00:31:37.001 lat (usec): min=894, max=47434, avg=22014.50, stdev=20344.19 00:31:37.001 clat percentiles (usec): 00:31:37.001 | 1.00th=[ 1090], 5.00th=[ 1205], 10.00th=[ 1254], 20.00th=[ 1287], 00:31:37.001 | 30.00th=[ 1401], 40.00th=[ 1549], 50.00th=[41681], 60.00th=[41681], 00:31:37.001 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:31:37.001 | 99.00th=[43254], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:31:37.001 | 99.99th=[47449] 00:31:37.001 bw ( KiB/s): min= 640, max= 768, per=49.64%, avg=724.80, stdev=39.23, samples=20 00:31:37.001 iops : min= 160, max= 192, avg=181.20, stdev= 9.81, samples=20 00:31:37.001 lat (usec) : 1000=0.33% 00:31:37.001 lat (msec) : 2=48.46%, 4=0.55%, 50=50.66% 00:31:37.001 cpu : usr=94.71%, sys=4.72%, ctx=13, majf=0, minf=47 00:31:37.001 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.001 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.001 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:37.001 00:31:37.001 Run status group 0 (all jobs): 00:31:37.001 READ: bw=1459KiB/s (1494kB/s), 725KiB/s-733KiB/s (743kB/s-751kB/s), io=14.3MiB (15.0MB), run=10013-10015msec 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 
-- # xtrace_disable 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.001 00:31:37.001 real 0m11.588s 00:31:37.001 user 0m20.596s 00:31:37.001 sys 0m1.391s 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:37.001 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 ************************************ 00:31:37.001 END TEST fio_dif_1_multi_subsystems 00:31:37.001 ************************************ 00:31:37.001 20:25:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:37.001 20:25:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:37.001 20:25:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:37.001 20:25:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 ************************************ 00:31:37.001 START TEST fio_dif_rand_params 00:31:37.001 ************************************ 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:37.001 20:25:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 bdev_null0 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:37.001 [2024-07-24 20:25:39.095836] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:37.001 { 00:31:37.001 "params": { 00:31:37.001 "name": "Nvme$subsystem", 00:31:37.001 "trtype": "$TEST_TRANSPORT", 00:31:37.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.001 "adrfam": "ipv4", 00:31:37.001 "trsvcid": "$NVMF_PORT", 00:31:37.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.001 "hdgst": ${hdgst:-false}, 00:31:37.001 "ddgst": ${ddgst:-false} 00:31:37.001 }, 00:31:37.001 "method": "bdev_nvme_attach_controller" 00:31:37.001 } 00:31:37.001 EOF 00:31:37.001 )") 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
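[Annotation] fio_bdev, expanded in the trace above, is a thin wrapper: it LD_PRELOADs the SPDK fio plugin from the workspace build tree and runs stock fio with the spdk_bdev ioengine. The /dev/fd/62 and /dev/fd/61 paths visible throughout this log are simply what bash substitutes for the two process substitutions, so neither the JSON config nor the generated job file ever touches disk. The shape of the invocation, with paths as in this CI workspace:

fio_bdev() {
    # fio_plugin, condensed: preload the SPDK bdev engine into stock fio
    # and pass all remaining arguments through unchanged.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio "$@"
}

# The two process substitutions become /dev/fd/62 and /dev/fd/61, exactly
# the arguments the trace shows fio receiving: bdev JSON first, job file second.
fio_bdev --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) <(gen_fio_conf)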
00:31:37.001 20:25:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:37.002 "params": { 00:31:37.002 "name": "Nvme0", 00:31:37.002 "trtype": "tcp", 00:31:37.002 "traddr": "10.0.0.2", 00:31:37.002 "adrfam": "ipv4", 00:31:37.002 "trsvcid": "4420", 00:31:37.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:37.002 "hdgst": false, 00:31:37.002 "ddgst": false 00:31:37.002 }, 00:31:37.002 "method": "bdev_nvme_attach_controller" 00:31:37.002 }' 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:37.002 20:25:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.002 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:37.002 ... 
00:31:37.002 fio-3.35 00:31:37.002 Starting 3 threads 00:31:37.002 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.269 00:31:42.269 filename0: (groupid=0, jobs=1): err= 0: pid=2185926: Wed Jul 24 20:25:45 2024 00:31:42.269 read: IOPS=180, BW=22.6MiB/s (23.7MB/s)(113MiB/5007msec) 00:31:42.269 slat (nsec): min=10893, max=69335, avg=18106.27, stdev=3813.17 00:31:42.269 clat (usec): min=6201, max=62046, avg=16550.99, stdev=7756.20 00:31:42.269 lat (usec): min=6217, max=62063, avg=16569.10, stdev=7756.51 00:31:42.269 clat percentiles (usec): 00:31:42.269 | 1.00th=[ 7504], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[11863], 00:31:42.269 | 30.00th=[13435], 40.00th=[14353], 50.00th=[15401], 60.00th=[16450], 00:31:42.269 | 70.00th=[17957], 80.00th=[19006], 90.00th=[20579], 95.00th=[22676], 00:31:42.269 | 99.00th=[56361], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:31:42.269 | 99.99th=[62129] 00:31:42.269 bw ( KiB/s): min=18688, max=30208, per=34.59%, avg=23121.00, stdev=3157.80, samples=10 00:31:42.269 iops : min= 146, max= 236, avg=180.60, stdev=24.69, samples=10 00:31:42.269 lat (msec) : 10=6.62%, 20=79.25%, 50=11.37%, 100=2.76% 00:31:42.269 cpu : usr=92.59%, sys=6.85%, ctx=9, majf=0, minf=124 00:31:42.269 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.269 issued rwts: total=906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:42.269 filename0: (groupid=0, jobs=1): err= 0: pid=2185927: Wed Jul 24 20:25:45 2024 00:31:42.269 read: IOPS=170, BW=21.4MiB/s (22.4MB/s)(108MiB/5048msec) 00:31:42.269 slat (nsec): min=6238, max=62501, avg=17939.33, stdev=3662.61 00:31:42.269 clat (usec): min=6110, max=67047, avg=17468.60, stdev=11104.38 00:31:42.269 lat (usec): min=6127, max=67081, avg=17486.54, stdev=11104.86 00:31:42.269 clat percentiles (usec): 00:31:42.269 | 1.00th=[ 6980], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[12125], 00:31:42.269 | 30.00th=[13304], 40.00th=[14091], 50.00th=[14615], 60.00th=[15401], 00:31:42.269 | 70.00th=[16450], 80.00th=[17695], 90.00th=[20579], 95.00th=[53216], 00:31:42.269 | 99.00th=[58459], 99.50th=[62653], 99.90th=[66847], 99.95th=[66847], 00:31:42.269 | 99.99th=[66847] 00:31:42.269 bw ( KiB/s): min=12288, max=27648, per=32.98%, avg=22041.60, stdev=5314.63, samples=10 00:31:42.269 iops : min= 96, max= 216, avg=172.20, stdev=41.52, samples=10 00:31:42.269 lat (msec) : 10=7.18%, 20=81.81%, 50=4.17%, 100=6.84% 00:31:42.269 cpu : usr=91.60%, sys=7.83%, ctx=8, majf=0, minf=45 00:31:42.269 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.269 issued rwts: total=863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:42.269 filename0: (groupid=0, jobs=1): err= 0: pid=2185928: Wed Jul 24 20:25:45 2024 00:31:42.269 read: IOPS=171, BW=21.5MiB/s (22.5MB/s)(108MiB/5047msec) 00:31:42.269 slat (nsec): min=6350, max=56386, avg=17642.70, stdev=3208.10 00:31:42.269 clat (usec): min=6469, max=95347, avg=17389.57, stdev=8533.64 00:31:42.269 lat (usec): min=6485, max=95364, avg=17407.22, stdev=8534.04 00:31:42.269 clat percentiles (usec): 
00:31:42.269 | 1.00th=[ 9765], 5.00th=[10814], 10.00th=[11469], 20.00th=[13042], 00:31:42.269 | 30.00th=[14091], 40.00th=[15139], 50.00th=[16057], 60.00th=[16909], 00:31:42.269 | 70.00th=[17957], 80.00th=[19006], 90.00th=[20841], 95.00th=[23200], 00:31:42.269 | 99.00th=[55837], 99.50th=[57934], 99.90th=[94897], 99.95th=[94897], 00:31:42.269 | 99.99th=[94897] 00:31:42.269 bw ( KiB/s): min=17664, max=26368, per=33.09%, avg=22118.40, stdev=2856.31, samples=10 00:31:42.269 iops : min= 138, max= 206, avg=172.80, stdev=22.31, samples=10 00:31:42.269 lat (msec) : 10=1.85%, 20=84.43%, 50=10.61%, 100=3.11% 00:31:42.269 cpu : usr=91.72%, sys=7.71%, ctx=7, majf=0, minf=109 00:31:42.269 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.269 issued rwts: total=867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:42.269 00:31:42.269 Run status group 0 (all jobs): 00:31:42.269 READ: bw=65.3MiB/s (68.4MB/s), 21.4MiB/s-22.6MiB/s (22.4MB/s-23.7MB/s), io=330MiB (346MB), run=5007-5048msec 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:42.269 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
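The teardown/setup xtrace that follows repeats one RPC sequence per subsystem: create a DIF-enabled null bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and expose a TCP listener. Condensed for sub_id=0, assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the running target:

sub_id=0
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2 (as traced below)
scripts/rpc.py bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
    --serial-number "53313233-$sub_id" --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
    -t tcp -a 10.0.0.2 -s 4420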
00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 bdev_null0 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 [2024-07-24 20:25:45.523720] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 bdev_null1 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 bdev_null2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:42.270 { 00:31:42.270 "params": { 00:31:42.270 "name": "Nvme$subsystem", 00:31:42.270 "trtype": "$TEST_TRANSPORT", 00:31:42.270 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.270 "adrfam": "ipv4", 00:31:42.270 "trsvcid": "$NVMF_PORT", 00:31:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.270 "hdgst": ${hdgst:-false}, 00:31:42.270 "ddgst": ${ddgst:-false} 00:31:42.270 }, 00:31:42.270 "method": "bdev_nvme_attach_controller" 00:31:42.270 } 00:31:42.270 EOF 00:31:42.270 )") 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:42.270 { 00:31:42.270 "params": { 00:31:42.270 "name": "Nvme$subsystem", 00:31:42.270 "trtype": "$TEST_TRANSPORT", 00:31:42.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.270 "adrfam": "ipv4", 00:31:42.270 "trsvcid": "$NVMF_PORT", 00:31:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.270 "hdgst": ${hdgst:-false}, 00:31:42.270 "ddgst": ${ddgst:-false} 00:31:42.270 }, 00:31:42.270 "method": "bdev_nvme_attach_controller" 00:31:42.270 } 00:31:42.270 EOF 00:31:42.270 )") 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:42.270 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:42.271 { 00:31:42.271 "params": { 00:31:42.271 "name": "Nvme$subsystem", 00:31:42.271 "trtype": "$TEST_TRANSPORT", 00:31:42.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.271 "adrfam": "ipv4", 00:31:42.271 "trsvcid": "$NVMF_PORT", 00:31:42.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.271 "hdgst": ${hdgst:-false}, 00:31:42.271 "ddgst": ${ddgst:-false} 00:31:42.271 }, 00:31:42.271 "method": "bdev_nvme_attach_controller" 00:31:42.271 } 00:31:42.271 EOF 00:31:42.271 )") 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:42.271 "params": { 00:31:42.271 "name": "Nvme0", 00:31:42.271 "trtype": "tcp", 00:31:42.271 "traddr": "10.0.0.2", 00:31:42.271 "adrfam": "ipv4", 00:31:42.271 "trsvcid": "4420", 00:31:42.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.271 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.271 "hdgst": false, 00:31:42.271 "ddgst": false 00:31:42.271 }, 00:31:42.271 "method": "bdev_nvme_attach_controller" 00:31:42.271 },{ 00:31:42.271 "params": { 00:31:42.271 "name": "Nvme1", 00:31:42.271 "trtype": "tcp", 00:31:42.271 "traddr": "10.0.0.2", 00:31:42.271 "adrfam": "ipv4", 00:31:42.271 "trsvcid": "4420", 00:31:42.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:42.271 "hdgst": false, 00:31:42.271 "ddgst": false 00:31:42.271 }, 00:31:42.271 "method": "bdev_nvme_attach_controller" 00:31:42.271 },{ 00:31:42.271 "params": { 00:31:42.271 "name": "Nvme2", 00:31:42.271 "trtype": "tcp", 00:31:42.271 "traddr": "10.0.0.2", 00:31:42.271 "adrfam": "ipv4", 00:31:42.271 "trsvcid": "4420", 00:31:42.271 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:42.271 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:42.271 "hdgst": false, 00:31:42.271 "ddgst": false 00:31:42.271 }, 00:31:42.271 "method": "bdev_nvme_attach_controller" 00:31:42.271 }' 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:42.271 20:25:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.271 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:42.271 ... 00:31:42.271 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:42.271 ... 00:31:42.271 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:42.271 ... 00:31:42.271 fio-3.35 00:31:42.271 Starting 24 threads 00:31:42.271 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.492 00:31:54.492 filename0: (groupid=0, jobs=1): err= 0: pid=2186790: Wed Jul 24 20:25:57 2024 00:31:54.492 read: IOPS=371, BW=1486KiB/s (1521kB/s)(14.6MiB/10037msec) 00:31:54.492 slat (usec): min=7, max=137, avg=27.00, stdev=23.07 00:31:54.492 clat (usec): min=21256, max=48471, avg=42839.60, stdev=2845.04 00:31:54.492 lat (usec): min=21285, max=48492, avg=42866.60, stdev=2845.55 00:31:54.492 clat percentiles (usec): 00:31:54.492 | 1.00th=[23725], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.492 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.492 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44827], 95.00th=[45876], 00:31:54.492 | 99.00th=[47973], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:31:54.492 | 99.99th=[48497] 00:31:54.492 bw ( KiB/s): min= 1408, max= 1536, per=4.22%, avg=1484.80, stdev=64.34, samples=20 00:31:54.492 iops : min= 352, max= 384, avg=371.20, stdev=16.08, samples=20 00:31:54.492 lat (msec) : 50=100.00% 00:31:54.492 cpu : usr=96.44%, sys=2.15%, ctx=43, majf=0, minf=30 00:31:54.492 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.492 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.492 issued rwts: total=3728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.492 filename0: (groupid=0, jobs=1): err= 0: pid=2186791: Wed Jul 24 20:25:57 2024 00:31:54.492 read: IOPS=369, BW=1478KiB/s (1513kB/s)(14.4MiB/10006msec) 00:31:54.492 slat (usec): min=5, max=118, avg=21.08, stdev=18.80 00:31:54.492 clat (usec): min=23920, max=49769, avg=43110.36, stdev=1853.33 00:31:54.492 lat (usec): min=23995, max=49788, avg=43131.43, stdev=1854.64 00:31:54.492 clat percentiles (usec): 00:31:54.492 | 1.00th=[41157], 5.00th=[42206], 10.00th=[42730], 20.00th=[42730], 00:31:54.492 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.492 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.492 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:31:54.492 | 99.99th=[49546] 00:31:54.492 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1475.37, stdev=78.31, samples=19 00:31:54.492 iops : min= 320, max= 384, avg=368.84, stdev=19.58, samples=19 00:31:54.492 lat (msec) : 50=100.00% 00:31:54.492 cpu : usr=96.27%, sys=2.28%, ctx=40, 
majf=0, minf=15 00:31:54.492 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.492 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.492 issued rwts: total=3696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.492 filename0: (groupid=0, jobs=1): err= 0: pid=2186792: Wed Jul 24 20:25:57 2024 00:31:54.492 read: IOPS=367, BW=1470KiB/s (1506kB/s)(14.4MiB/10011msec) 00:31:54.492 slat (usec): min=11, max=143, avg=45.12, stdev=17.07 00:31:54.492 clat (usec): min=30391, max=82242, avg=43093.20, stdev=2416.36 00:31:54.492 lat (usec): min=30437, max=82281, avg=43138.32, stdev=2416.90 00:31:54.492 clat percentiles (usec): 00:31:54.493 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.493 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.493 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.493 | 99.00th=[47973], 99.50th=[49546], 99.90th=[70779], 99.95th=[81265], 00:31:54.493 | 99.99th=[82314] 00:31:54.493 bw ( KiB/s): min= 1282, max= 1536, per=4.16%, avg=1465.35, stdev=76.84, samples=20 00:31:54.493 iops : min= 320, max= 384, avg=366.30, stdev=19.26, samples=20 00:31:54.493 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.493 cpu : usr=96.49%, sys=2.12%, ctx=287, majf=0, minf=15 00:31:54.493 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.493 filename0: (groupid=0, jobs=1): err= 0: pid=2186793: Wed Jul 24 20:25:57 2024 00:31:54.493 read: IOPS=365, BW=1464KiB/s (1499kB/s)(14.4MiB/10066msec) 00:31:54.493 slat (usec): min=8, max=148, avg=41.47, stdev=25.74 00:31:54.493 clat (usec): min=41166, max=65354, avg=43156.55, stdev=1435.72 00:31:54.493 lat (usec): min=41252, max=65465, avg=43198.02, stdev=1430.57 00:31:54.493 clat percentiles (usec): 00:31:54.493 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:31:54.493 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.493 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.493 | 99.00th=[47973], 99.50th=[49021], 99.90th=[65274], 99.95th=[65274], 00:31:54.493 | 99.99th=[65274] 00:31:54.493 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1472.00, stdev=77.69, samples=20 00:31:54.493 iops : min= 320, max= 384, avg=368.00, stdev=19.42, samples=20 00:31:54.493 lat (msec) : 50=99.89%, 100=0.11% 00:31:54.493 cpu : usr=97.35%, sys=1.85%, ctx=37, majf=0, minf=29 00:31:54.493 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 issued rwts: total=3684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.493 filename0: (groupid=0, jobs=1): err= 0: pid=2186794: Wed Jul 24 20:25:57 2024 00:31:54.493 read: IOPS=368, BW=1474KiB/s (1509kB/s)(14.4MiB/10031msec) 00:31:54.493 slat (nsec): 
min=12385, max=98328, avg=43010.84, stdev=13770.23 00:31:54.493 clat (usec): min=30541, max=56750, avg=43057.75, stdev=1514.69 00:31:54.493 lat (usec): min=30571, max=56770, avg=43100.76, stdev=1514.57 00:31:54.493 clat percentiles (usec): 00:31:54.493 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.493 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.493 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.493 | 99.00th=[47973], 99.50th=[49021], 99.90th=[52691], 99.95th=[56361], 00:31:54.493 | 99.99th=[56886] 00:31:54.493 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1472.00, stdev=77.69, samples=20 00:31:54.493 iops : min= 320, max= 384, avg=368.00, stdev=19.42, samples=20 00:31:54.493 lat (msec) : 50=99.89%, 100=0.11% 00:31:54.493 cpu : usr=97.27%, sys=2.02%, ctx=57, majf=0, minf=23 00:31:54.493 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 issued rwts: total=3696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.493 filename0: (groupid=0, jobs=1): err= 0: pid=2186795: Wed Jul 24 20:25:57 2024 00:31:54.493 read: IOPS=367, BW=1468KiB/s (1504kB/s)(14.4MiB/10025msec) 00:31:54.493 slat (nsec): min=10792, max=96336, avg=32952.21, stdev=15441.05 00:31:54.493 clat (usec): min=37848, max=82461, avg=43306.51, stdev=2830.54 00:31:54.493 lat (usec): min=37873, max=82505, avg=43339.46, stdev=2830.50 00:31:54.493 clat percentiles (usec): 00:31:54.493 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.493 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.493 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.493 | 99.00th=[48497], 99.50th=[49546], 99.90th=[82314], 99.95th=[82314], 00:31:54.493 | 99.99th=[82314] 00:31:54.493 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1465.60, stdev=77.42, samples=20 00:31:54.493 iops : min= 320, max= 384, avg=366.40, stdev=19.35, samples=20 00:31:54.493 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.493 cpu : usr=97.50%, sys=1.84%, ctx=33, majf=0, minf=17 00:31:54.493 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.493 filename0: (groupid=0, jobs=1): err= 0: pid=2186796: Wed Jul 24 20:25:57 2024 00:31:54.493 read: IOPS=367, BW=1470KiB/s (1505kB/s)(14.4MiB/10014msec) 00:31:54.493 slat (nsec): min=4360, max=69111, avg=33249.02, stdev=9312.18 00:31:54.493 clat (usec): min=37904, max=71106, avg=43243.32, stdev=2188.97 00:31:54.493 lat (usec): min=37935, max=71127, avg=43276.57, stdev=2187.49 00:31:54.493 clat percentiles (usec): 00:31:54.493 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.493 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.493 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.493 | 99.00th=[48497], 99.50th=[49546], 99.90th=[70779], 99.95th=[70779], 00:31:54.493 | 
99.99th=[70779] 00:31:54.493 bw ( KiB/s): min= 1282, max= 1536, per=4.17%, avg=1468.74, stdev=78.04, samples=19 00:31:54.493 iops : min= 320, max= 384, avg=367.16, stdev=19.58, samples=19 00:31:54.493 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.493 cpu : usr=98.10%, sys=1.33%, ctx=60, majf=0, minf=19 00:31:54.493 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.493 filename0: (groupid=0, jobs=1): err= 0: pid=2186797: Wed Jul 24 20:25:57 2024 00:31:54.493 read: IOPS=367, BW=1470KiB/s (1506kB/s)(14.4MiB/10012msec) 00:31:54.493 slat (usec): min=13, max=134, avg=44.45, stdev=16.15 00:31:54.493 clat (usec): min=30440, max=70930, avg=43117.31, stdev=2301.99 00:31:54.493 lat (usec): min=30478, max=70976, avg=43161.77, stdev=2302.82 00:31:54.493 clat percentiles (usec): 00:31:54.493 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.493 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.493 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.493 | 99.00th=[48497], 99.50th=[49546], 99.90th=[70779], 99.95th=[70779], 00:31:54.493 | 99.99th=[70779] 00:31:54.493 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1465.60, stdev=77.42, samples=20 00:31:54.493 iops : min= 320, max= 384, avg=366.40, stdev=19.35, samples=20 00:31:54.493 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.493 cpu : usr=95.87%, sys=2.57%, ctx=76, majf=0, minf=31 00:31:54.493 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.493 filename1: (groupid=0, jobs=1): err= 0: pid=2186798: Wed Jul 24 20:25:57 2024 00:31:54.493 read: IOPS=368, BW=1474KiB/s (1509kB/s)(14.4MiB/10032msec) 00:31:54.493 slat (nsec): min=8872, max=88678, avg=38199.45, stdev=13754.36 00:31:54.493 clat (usec): min=30435, max=54390, avg=43112.68, stdev=1469.02 00:31:54.493 lat (usec): min=30496, max=54415, avg=43150.88, stdev=1467.42 00:31:54.493 clat percentiles (usec): 00:31:54.493 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.493 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.493 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.493 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49546], 99.95th=[54264], 00:31:54.493 | 99.99th=[54264] 00:31:54.493 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1472.00, stdev=77.69, samples=20 00:31:54.493 iops : min= 320, max= 384, avg=368.00, stdev=19.42, samples=20 00:31:54.493 lat (msec) : 50=99.95%, 100=0.05% 00:31:54.493 cpu : usr=97.70%, sys=1.53%, ctx=105, majf=0, minf=20 00:31:54.493 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.493 issued rwts: total=3696,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:54.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.493 filename1: (groupid=0, jobs=1): err= 0: pid=2186799: Wed Jul 24 20:25:57 2024 00:31:54.493 read: IOPS=367, BW=1471KiB/s (1506kB/s)(14.4MiB/10008msec) 00:31:54.493 slat (usec): min=6, max=104, avg=37.75, stdev=16.74 00:31:54.493 clat (usec): min=37912, max=65981, avg=43164.75, stdev=1906.42 00:31:54.493 lat (usec): min=37945, max=66000, avg=43202.49, stdev=1906.20 00:31:54.493 clat percentiles (usec): 00:31:54.493 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.493 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.493 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.493 | 99.00th=[48497], 99.50th=[49546], 99.90th=[65799], 99.95th=[65799], 00:31:54.493 | 99.99th=[65799] 00:31:54.493 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1468.63, stdev=78.31, samples=19 00:31:54.493 iops : min= 320, max= 384, avg=367.16, stdev=19.58, samples=19 00:31:54.493 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.493 cpu : usr=97.45%, sys=1.74%, ctx=22, majf=0, minf=17 00:31:54.493 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.494 filename1: (groupid=0, jobs=1): err= 0: pid=2186800: Wed Jul 24 20:25:57 2024 00:31:54.494 read: IOPS=368, BW=1474KiB/s (1509kB/s)(14.4MiB/10031msec) 00:31:54.494 slat (usec): min=9, max=179, avg=46.49, stdev=20.52 00:31:54.494 clat (usec): min=30464, max=52550, avg=43004.50, stdev=1448.21 00:31:54.494 lat (usec): min=30516, max=52575, avg=43050.99, stdev=1451.69 00:31:54.494 clat percentiles (usec): 00:31:54.494 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.494 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.494 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.494 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[52691], 00:31:54.494 | 99.99th=[52691] 00:31:54.494 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1472.00, stdev=77.69, samples=20 00:31:54.494 iops : min= 320, max= 384, avg=368.00, stdev=19.42, samples=20 00:31:54.494 lat (msec) : 50=99.95%, 100=0.05% 00:31:54.494 cpu : usr=97.28%, sys=1.73%, ctx=104, majf=0, minf=21 00:31:54.494 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 issued rwts: total=3696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.494 filename1: (groupid=0, jobs=1): err= 0: pid=2186801: Wed Jul 24 20:25:57 2024 00:31:54.494 read: IOPS=370, BW=1481KiB/s (1517kB/s)(14.5MiB/10024msec) 00:31:54.494 slat (usec): min=8, max=167, avg=37.94, stdev=17.66 00:31:54.494 clat (usec): min=17748, max=49668, avg=42890.18, stdev=2547.97 00:31:54.494 lat (usec): min=17760, max=49690, avg=42928.12, stdev=2549.23 00:31:54.494 clat percentiles (usec): 00:31:54.494 | 1.00th=[27657], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.494 | 
30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.494 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.494 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:31:54.494 | 99.99th=[49546] 00:31:54.494 bw ( KiB/s): min= 1408, max= 1536, per=4.20%, avg=1478.40, stdev=65.33, samples=20 00:31:54.494 iops : min= 352, max= 384, avg=369.60, stdev=16.33, samples=20 00:31:54.494 lat (msec) : 20=0.43%, 50=99.57% 00:31:54.494 cpu : usr=93.00%, sys=3.59%, ctx=261, majf=0, minf=21 00:31:54.494 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.494 filename1: (groupid=0, jobs=1): err= 0: pid=2186802: Wed Jul 24 20:25:57 2024 00:31:54.494 read: IOPS=371, BW=1486KiB/s (1521kB/s)(14.6MiB/10038msec) 00:31:54.494 slat (usec): min=8, max=145, avg=27.99, stdev=27.72 00:31:54.494 clat (usec): min=19728, max=48499, avg=42830.87, stdev=2830.74 00:31:54.494 lat (usec): min=19737, max=48517, avg=42858.86, stdev=2829.79 00:31:54.494 clat percentiles (usec): 00:31:54.494 | 1.00th=[24773], 5.00th=[41681], 10.00th=[42206], 20.00th=[42730], 00:31:54.494 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.494 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44827], 95.00th=[45351], 00:31:54.494 | 99.00th=[47973], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:31:54.494 | 99.99th=[48497] 00:31:54.494 bw ( KiB/s): min= 1408, max= 1536, per=4.22%, avg=1484.80, stdev=64.34, samples=20 00:31:54.494 iops : min= 352, max= 384, avg=371.20, stdev=16.08, samples=20 00:31:54.494 lat (msec) : 20=0.19%, 50=99.81% 00:31:54.494 cpu : usr=95.48%, sys=2.74%, ctx=147, majf=0, minf=34 00:31:54.494 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 issued rwts: total=3728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.494 filename1: (groupid=0, jobs=1): err= 0: pid=2186803: Wed Jul 24 20:25:57 2024 00:31:54.494 read: IOPS=367, BW=1470KiB/s (1506kB/s)(14.4MiB/10012msec) 00:31:54.494 slat (nsec): min=11510, max=91690, avg=35050.94, stdev=12154.66 00:31:54.494 clat (usec): min=22897, max=81972, avg=43220.21, stdev=3067.40 00:31:54.494 lat (usec): min=22915, max=82008, avg=43255.26, stdev=3067.51 00:31:54.494 clat percentiles (usec): 00:31:54.494 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.494 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.494 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44827], 95.00th=[45876], 00:31:54.494 | 99.00th=[47973], 99.50th=[48497], 99.90th=[82314], 99.95th=[82314], 00:31:54.494 | 99.99th=[82314] 00:31:54.494 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1465.60, stdev=77.42, samples=20 00:31:54.494 iops : min= 320, max= 384, avg=366.40, stdev=19.35, samples=20 00:31:54.494 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.494 cpu : usr=97.68%, sys=1.73%, ctx=23, majf=0, minf=19 00:31:54.494 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.494 filename1: (groupid=0, jobs=1): err= 0: pid=2186804: Wed Jul 24 20:25:57 2024 00:31:54.494 read: IOPS=367, BW=1469KiB/s (1504kB/s)(14.4MiB/10023msec) 00:31:54.494 slat (nsec): min=5449, max=99999, avg=41180.99, stdev=13946.21 00:31:54.494 clat (usec): min=30426, max=80423, avg=43171.35, stdev=2839.19 00:31:54.494 lat (usec): min=30455, max=80441, avg=43212.53, stdev=2837.62 00:31:54.494 clat percentiles (usec): 00:31:54.494 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.494 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.494 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.494 | 99.00th=[48497], 99.50th=[49546], 99.90th=[80217], 99.95th=[80217], 00:31:54.494 | 99.99th=[80217] 00:31:54.494 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1464.05, stdev=76.23, samples=20 00:31:54.494 iops : min= 320, max= 384, avg=366.00, stdev=19.05, samples=20 00:31:54.494 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.494 cpu : usr=97.70%, sys=1.83%, ctx=15, majf=0, minf=26 00:31:54.494 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.494 filename1: (groupid=0, jobs=1): err= 0: pid=2186805: Wed Jul 24 20:25:57 2024 00:31:54.494 read: IOPS=368, BW=1473KiB/s (1508kB/s)(14.4MiB/10031msec) 00:31:54.494 slat (usec): min=9, max=111, avg=46.49, stdev=16.60 00:31:54.494 clat (usec): min=30335, max=51184, avg=43013.88, stdev=1417.87 00:31:54.494 lat (usec): min=30387, max=51237, avg=43060.37, stdev=1419.68 00:31:54.494 clat percentiles (usec): 00:31:54.494 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.494 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.494 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.494 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[51119], 00:31:54.494 | 99.99th=[51119] 00:31:54.494 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1472.00, stdev=77.69, samples=20 00:31:54.494 iops : min= 320, max= 384, avg=368.00, stdev=19.42, samples=20 00:31:54.494 lat (msec) : 50=99.95%, 100=0.05% 00:31:54.494 cpu : usr=96.98%, sys=2.08%, ctx=118, majf=0, minf=28 00:31:54.494 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 issued rwts: total=3693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.494 filename2: (groupid=0, jobs=1): err= 0: pid=2186806: Wed Jul 24 20:25:57 2024 00:31:54.494 read: IOPS=367, BW=1470KiB/s (1506kB/s)(14.4MiB/10011msec) 00:31:54.494 slat (usec): min=10, max=118, avg=43.97, stdev=14.80 00:31:54.494 clat (usec): 
min=30440, max=71054, avg=43111.23, stdev=2326.74 00:31:54.494 lat (usec): min=30478, max=71094, avg=43155.20, stdev=2327.32 00:31:54.494 clat percentiles (usec): 00:31:54.494 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.494 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.494 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.494 | 99.00th=[48497], 99.50th=[49546], 99.90th=[70779], 99.95th=[70779], 00:31:54.494 | 99.99th=[70779] 00:31:54.494 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1465.60, stdev=77.42, samples=20 00:31:54.494 iops : min= 320, max= 384, avg=366.40, stdev=19.35, samples=20 00:31:54.494 lat (msec) : 50=99.51%, 100=0.49% 00:31:54.494 cpu : usr=96.57%, sys=2.07%, ctx=78, majf=0, minf=19 00:31:54.494 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.494 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.494 filename2: (groupid=0, jobs=1): err= 0: pid=2186807: Wed Jul 24 20:25:57 2024 00:31:54.494 read: IOPS=367, BW=1471KiB/s (1506kB/s)(14.4MiB/10006msec) 00:31:54.494 slat (usec): min=6, max=113, avg=39.17, stdev=17.18 00:31:54.494 clat (usec): min=30236, max=71531, avg=43159.04, stdev=1895.30 00:31:54.494 lat (usec): min=30250, max=71546, avg=43198.21, stdev=1894.82 00:31:54.494 clat percentiles (usec): 00:31:54.494 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.494 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.494 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.495 | 99.00th=[48497], 99.50th=[49546], 99.90th=[63701], 99.95th=[71828], 00:31:54.495 | 99.99th=[71828] 00:31:54.495 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1468.63, stdev=78.31, samples=19 00:31:54.495 iops : min= 320, max= 384, avg=367.16, stdev=19.58, samples=19 00:31:54.495 lat (msec) : 50=99.51%, 100=0.49% 00:31:54.495 cpu : usr=95.31%, sys=2.81%, ctx=256, majf=0, minf=26 00:31:54.495 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.495 filename2: (groupid=0, jobs=1): err= 0: pid=2186808: Wed Jul 24 20:25:57 2024 00:31:54.495 read: IOPS=370, BW=1481KiB/s (1516kB/s)(14.5MiB/10029msec) 00:31:54.495 slat (nsec): min=6188, max=89524, avg=32395.57, stdev=13107.34 00:31:54.495 clat (usec): min=19813, max=60849, avg=42932.33, stdev=2428.66 00:31:54.495 lat (usec): min=19823, max=60902, avg=42964.72, stdev=2429.42 00:31:54.495 clat percentiles (usec): 00:31:54.495 | 1.00th=[35390], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.495 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.495 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44827], 95.00th=[45876], 00:31:54.495 | 99.00th=[47973], 99.50th=[48497], 99.90th=[48497], 99.95th=[60556], 00:31:54.495 | 99.99th=[61080] 00:31:54.495 bw ( KiB/s): min= 1408, max= 1536, per=4.20%, avg=1478.40, stdev=65.33, 
samples=20 00:31:54.495 iops : min= 352, max= 384, avg=369.60, stdev=16.33, samples=20 00:31:54.495 lat (msec) : 20=0.40%, 50=99.54%, 100=0.05% 00:31:54.495 cpu : usr=93.75%, sys=3.24%, ctx=234, majf=0, minf=28 00:31:54.495 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.495 filename2: (groupid=0, jobs=1): err= 0: pid=2186809: Wed Jul 24 20:25:57 2024 00:31:54.495 read: IOPS=367, BW=1471KiB/s (1506kB/s)(14.4MiB/10009msec) 00:31:54.495 slat (nsec): min=11532, max=82729, avg=36044.15, stdev=11706.88 00:31:54.495 clat (usec): min=40919, max=57657, avg=43180.70, stdev=1459.39 00:31:54.495 lat (usec): min=40962, max=57685, avg=43216.75, stdev=1457.95 00:31:54.495 clat percentiles (usec): 00:31:54.495 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.495 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.495 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44827], 95.00th=[45876], 00:31:54.495 | 99.00th=[47973], 99.50th=[48497], 99.90th=[57410], 99.95th=[57410], 00:31:54.495 | 99.99th=[57410] 00:31:54.495 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1465.60, stdev=77.42, samples=20 00:31:54.495 iops : min= 320, max= 384, avg=366.40, stdev=19.35, samples=20 00:31:54.495 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.495 cpu : usr=96.26%, sys=2.26%, ctx=51, majf=0, minf=20 00:31:54.495 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.495 filename2: (groupid=0, jobs=1): err= 0: pid=2186810: Wed Jul 24 20:25:57 2024 00:31:54.495 read: IOPS=367, BW=1471KiB/s (1506kB/s)(14.4MiB/10009msec) 00:31:54.495 slat (usec): min=8, max=119, avg=32.69, stdev=25.27 00:31:54.495 clat (usec): min=20590, max=81693, avg=43229.49, stdev=1741.03 00:31:54.495 lat (usec): min=20652, max=81719, avg=43262.18, stdev=1740.29 00:31:54.495 clat percentiles (usec): 00:31:54.495 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.495 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.495 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44827], 95.00th=[45876], 00:31:54.495 | 99.00th=[47973], 99.50th=[48497], 99.90th=[57410], 99.95th=[81265], 00:31:54.495 | 99.99th=[81265] 00:31:54.495 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1465.60, stdev=77.42, samples=20 00:31:54.495 iops : min= 320, max= 384, avg=366.40, stdev=19.35, samples=20 00:31:54.495 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.495 cpu : usr=96.98%, sys=2.06%, ctx=143, majf=0, minf=20 00:31:54.495 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.495 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:31:54.495 filename2: (groupid=0, jobs=1): err= 0: pid=2186811: Wed Jul 24 20:25:57 2024 00:31:54.495 read: IOPS=367, BW=1470KiB/s (1506kB/s)(14.4MiB/10012msec) 00:31:54.495 slat (usec): min=11, max=112, avg=38.95, stdev=16.61 00:31:54.495 clat (usec): min=20015, max=81894, avg=43173.89, stdev=3508.04 00:31:54.495 lat (usec): min=20052, max=81938, avg=43212.84, stdev=3509.44 00:31:54.495 clat percentiles (usec): 00:31:54.495 | 1.00th=[41157], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.495 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.495 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44827], 95.00th=[45876], 00:31:54.495 | 99.00th=[47973], 99.50th=[68682], 99.90th=[81265], 99.95th=[82314], 00:31:54.495 | 99.99th=[82314] 00:31:54.495 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1465.60, stdev=77.42, samples=20 00:31:54.495 iops : min= 320, max= 384, avg=366.40, stdev=19.35, samples=20 00:31:54.495 lat (msec) : 50=99.32%, 100=0.68% 00:31:54.495 cpu : usr=97.40%, sys=1.78%, ctx=60, majf=0, minf=26 00:31:54.495 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:54.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.495 filename2: (groupid=0, jobs=1): err= 0: pid=2186812: Wed Jul 24 20:25:57 2024 00:31:54.495 read: IOPS=368, BW=1473KiB/s (1508kB/s)(14.4MiB/10031msec) 00:31:54.495 slat (usec): min=8, max=103, avg=43.12, stdev=14.01 00:31:54.495 clat (usec): min=30418, max=49615, avg=43048.01, stdev=1409.47 00:31:54.495 lat (usec): min=30488, max=49656, avg=43091.13, stdev=1409.12 00:31:54.495 clat percentiles (usec): 00:31:54.495 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:31:54.495 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.495 | 70.00th=[43254], 80.00th=[43254], 90.00th=[44303], 95.00th=[45876], 00:31:54.495 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:31:54.495 | 99.99th=[49546] 00:31:54.495 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1472.00, stdev=77.69, samples=20 00:31:54.495 iops : min= 320, max= 384, avg=368.00, stdev=19.42, samples=20 00:31:54.495 lat (msec) : 50=100.00% 00:31:54.495 cpu : usr=96.03%, sys=2.56%, ctx=190, majf=0, minf=28 00:31:54.495 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 issued rwts: total=3694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.495 filename2: (groupid=0, jobs=1): err= 0: pid=2186813: Wed Jul 24 20:25:57 2024 00:31:54.495 read: IOPS=367, BW=1469KiB/s (1504kB/s)(14.4MiB/10019msec) 00:31:54.495 slat (usec): min=5, max=129, avg=34.00, stdev=13.91 00:31:54.495 clat (usec): min=37905, max=76250, avg=43269.46, stdev=2451.57 00:31:54.495 lat (usec): min=37925, max=76280, avg=43303.45, stdev=2452.31 00:31:54.495 clat percentiles (usec): 00:31:54.495 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:31:54.495 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:31:54.495 | 70.00th=[43254], 
80.00th=[43254], 90.00th=[44303], 95.00th=[45351], 00:31:54.495 | 99.00th=[48497], 99.50th=[49546], 99.90th=[76022], 99.95th=[76022], 00:31:54.495 | 99.99th=[76022] 00:31:54.495 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1465.60, stdev=77.42, samples=20 00:31:54.495 iops : min= 320, max= 384, avg=366.40, stdev=19.35, samples=20 00:31:54.495 lat (msec) : 50=99.57%, 100=0.43% 00:31:54.495 cpu : usr=97.72%, sys=1.73%, ctx=21, majf=0, minf=19 00:31:54.495 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:54.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.495 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.495 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:54.495 00:31:54.495 Run status group 0 (all jobs): 00:31:54.495 READ: bw=34.4MiB/s (36.0MB/s), 1464KiB/s-1486KiB/s (1499kB/s-1521kB/s), io=346MiB (363MB), run=10006-10066msec 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.495 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for 
sub in "$@" 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 bdev_null0 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 [2024-07-24 20:25:57.708850] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 bdev_null1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:54.496 { 00:31:54.496 "params": { 00:31:54.496 "name": "Nvme$subsystem", 
00:31:54.496 "trtype": "$TEST_TRANSPORT", 00:31:54.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.496 "adrfam": "ipv4", 00:31:54.496 "trsvcid": "$NVMF_PORT", 00:31:54.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.496 "hdgst": ${hdgst:-false}, 00:31:54.496 "ddgst": ${ddgst:-false} 00:31:54.496 }, 00:31:54.496 "method": "bdev_nvme_attach_controller" 00:31:54.496 } 00:31:54.496 EOF 00:31:54.496 )") 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:54.496 { 00:31:54.496 "params": { 00:31:54.496 "name": "Nvme$subsystem", 00:31:54.496 "trtype": "$TEST_TRANSPORT", 00:31:54.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.496 "adrfam": "ipv4", 00:31:54.496 "trsvcid": "$NVMF_PORT", 00:31:54.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.496 "hdgst": ${hdgst:-false}, 00:31:54.496 "ddgst": ${ddgst:-false} 00:31:54.496 }, 00:31:54.496 "method": "bdev_nvme_attach_controller" 00:31:54.496 } 00:31:54.496 EOF 00:31:54.496 )") 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file <= files )) 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:54.496 20:25:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:54.496 "params": { 00:31:54.496 "name": "Nvme0", 00:31:54.497 "trtype": "tcp", 00:31:54.497 "traddr": "10.0.0.2", 00:31:54.497 "adrfam": "ipv4", 00:31:54.497 "trsvcid": "4420", 00:31:54.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:54.497 "hdgst": false, 00:31:54.497 "ddgst": false 00:31:54.497 }, 00:31:54.497 "method": "bdev_nvme_attach_controller" 00:31:54.497 },{ 00:31:54.497 "params": { 00:31:54.497 "name": "Nvme1", 00:31:54.497 "trtype": "tcp", 00:31:54.497 "traddr": "10.0.0.2", 00:31:54.497 "adrfam": "ipv4", 00:31:54.497 "trsvcid": "4420", 00:31:54.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:54.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:54.497 "hdgst": false, 00:31:54.497 "ddgst": false 00:31:54.497 }, 00:31:54.497 "method": "bdev_nvme_attach_controller" 00:31:54.497 }' 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:54.497 20:25:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.497 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:54.497 ... 00:31:54.497 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:54.497 ... 
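A note before the run output begins: the fio invocation traced above never writes its configuration to disk. The SPDK bdev JSON printed a few lines up arrives on /dev/fd/62 and the generated fio job file on /dev/fd/61, both via process substitution, while the spdk_bdev ioengine is injected with LD_PRELOAD. A minimal standalone sketch of the same pattern follows; bdev.json and job.fio are hypothetical stand-ins for the two substituted streams, and the plugin path is the one used on this runner.

  # Sketch only: drive fio through the SPDK bdev ioengine as the harness does,
  # feeding both configs as anonymous file descriptors.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(cat bdev.json) \
    <(cat job.fio)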
00:31:54.497 fio-3.35 00:31:54.497 Starting 4 threads 00:31:54.497 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.059 00:32:01.059 filename0: (groupid=0, jobs=1): err= 0: pid=2188198: Wed Jul 24 20:26:04 2024 00:32:01.059 read: IOPS=1396, BW=10.9MiB/s (11.4MB/s)(54.6MiB/5004msec) 00:32:01.059 slat (nsec): min=4472, max=82033, avg=28647.63, stdev=9870.00 00:32:01.059 clat (usec): min=1170, max=13885, avg=5626.72, stdev=665.37 00:32:01.059 lat (usec): min=1195, max=13899, avg=5655.37, stdev=663.01 00:32:01.059 clat percentiles (usec): 00:32:01.059 | 1.00th=[ 4424], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5407], 00:32:01.059 | 30.00th=[ 5473], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5604], 00:32:01.059 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5800], 95.00th=[ 6325], 00:32:01.059 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[11863], 99.95th=[11863], 00:32:01.059 | 99.99th=[13829] 00:32:01.059 bw ( KiB/s): min= 9884, max=11552, per=25.00%, avg=11169.20, stdev=478.25, samples=10 00:32:01.059 iops : min= 1235, max= 1444, avg=1396.10, stdev=59.93, samples=10 00:32:01.059 lat (msec) : 2=0.06%, 4=0.29%, 10=99.47%, 20=0.19% 00:32:01.059 cpu : usr=94.94%, sys=4.40%, ctx=11, majf=0, minf=9 00:32:01.059 IO depths : 1=0.2%, 2=18.9%, 4=55.0%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.059 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.059 issued rwts: total=6990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.059 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:01.059 filename0: (groupid=0, jobs=1): err= 0: pid=2188199: Wed Jul 24 20:26:04 2024 00:32:01.059 read: IOPS=1393, BW=10.9MiB/s (11.4MB/s)(54.5MiB/5003msec) 00:32:01.059 slat (usec): min=4, max=324, avg=27.58, stdev=13.81 00:32:01.059 clat (usec): min=1002, max=14764, avg=5644.63, stdev=754.15 00:32:01.059 lat (usec): min=1020, max=14778, avg=5672.21, stdev=752.47 00:32:01.059 clat percentiles (usec): 00:32:01.059 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5407], 00:32:01.059 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5538], 60.00th=[ 5604], 00:32:01.059 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 6652], 00:32:01.059 | 99.00th=[ 8979], 99.50th=[ 9372], 99.90th=[10945], 99.95th=[11863], 00:32:01.059 | 99.99th=[14746] 00:32:01.059 bw ( KiB/s): min= 9904, max=11440, per=24.94%, avg=11140.80, stdev=449.65, samples=10 00:32:01.059 iops : min= 1238, max= 1430, avg=1392.60, stdev=56.21, samples=10 00:32:01.059 lat (msec) : 2=0.29%, 4=0.62%, 10=98.90%, 20=0.20% 00:32:01.059 cpu : usr=93.90%, sys=5.52%, ctx=19, majf=0, minf=9 00:32:01.059 IO depths : 1=0.1%, 2=11.1%, 4=63.2%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.059 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.059 issued rwts: total=6971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.059 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:01.059 filename1: (groupid=0, jobs=1): err= 0: pid=2188200: Wed Jul 24 20:26:04 2024 00:32:01.059 read: IOPS=1401, BW=10.9MiB/s (11.5MB/s)(54.8MiB/5001msec) 00:32:01.059 slat (nsec): min=4095, max=90537, avg=27908.29, stdev=12245.81 00:32:01.059 clat (usec): min=1917, max=14546, avg=5605.29, stdev=706.29 00:32:01.059 lat (usec): min=1934, max=14560, avg=5633.20, stdev=704.38 00:32:01.059 clat percentiles (usec): 00:32:01.059 | 1.00th=[ 
4228], 5.00th=[ 5014], 10.00th=[ 5276], 20.00th=[ 5407], 00:32:01.059 | 30.00th=[ 5473], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5604], 00:32:01.059 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 6063], 00:32:01.059 | 99.00th=[ 8979], 99.50th=[ 9372], 99.90th=[12780], 99.95th=[12780], 00:32:01.059 | 99.99th=[14484] 00:32:01.059 bw ( KiB/s): min= 9840, max=11728, per=25.10%, avg=11214.22, stdev=538.47, samples=9 00:32:01.059 iops : min= 1230, max= 1466, avg=1401.78, stdev=67.31, samples=9 00:32:01.059 lat (msec) : 2=0.01%, 4=0.57%, 10=99.10%, 20=0.31% 00:32:01.059 cpu : usr=94.76%, sys=4.46%, ctx=15, majf=0, minf=9 00:32:01.059 IO depths : 1=0.2%, 2=19.3%, 4=54.8%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.059 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.059 issued rwts: total=7008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.059 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:01.059 filename1: (groupid=0, jobs=1): err= 0: pid=2188201: Wed Jul 24 20:26:04 2024 00:32:01.059 read: IOPS=1394, BW=10.9MiB/s (11.4MB/s)(54.5MiB/5002msec) 00:32:01.059 slat (usec): min=4, max=300, avg=29.75, stdev=22.81 00:32:01.059 clat (usec): min=1621, max=17697, avg=5615.57, stdev=851.98 00:32:01.059 lat (usec): min=1639, max=17710, avg=5645.33, stdev=849.87 00:32:01.059 clat percentiles (usec): 00:32:01.059 | 1.00th=[ 4228], 5.00th=[ 5014], 10.00th=[ 5276], 20.00th=[ 5342], 00:32:01.059 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5538], 00:32:01.059 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 6194], 00:32:01.059 | 99.00th=[ 9372], 99.50th=[10421], 99.90th=[15401], 99.95th=[16188], 00:32:01.059 | 99.99th=[17695] 00:32:01.059 bw ( KiB/s): min= 9266, max=11536, per=24.96%, avg=11149.00, stdev=671.88, samples=10 00:32:01.059 iops : min= 1158, max= 1442, avg=1393.60, stdev=84.06, samples=10 00:32:01.059 lat (msec) : 2=0.16%, 4=0.56%, 10=98.65%, 20=0.63% 00:32:01.059 cpu : usr=79.38%, sys=10.44%, ctx=95, majf=0, minf=9 00:32:01.059 IO depths : 1=0.7%, 2=21.8%, 4=52.5%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.059 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.059 issued rwts: total=6975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.059 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:01.059 00:32:01.059 Run status group 0 (all jobs): 00:32:01.059 READ: bw=43.6MiB/s (45.7MB/s), 10.9MiB/s-10.9MiB/s (11.4MB/s-11.5MB/s), io=218MiB (229MB), run=5001-5004msec 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.059 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.060 00:32:01.060 real 0m25.516s 00:32:01.060 user 4m30.226s 00:32:01.060 sys 0m8.882s 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:01.060 20:26:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:01.060 ************************************ 00:32:01.060 END TEST fio_dif_rand_params 00:32:01.060 ************************************ 00:32:01.060 20:26:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:01.060 20:26:04 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:01.060 20:26:04 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.060 20:26:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.060 ************************************ 00:32:01.060 START TEST fio_dif_digest 00:32:01.060 ************************************ 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.060 bdev_null0 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.060 [2024-07-24 20:26:04.681825] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:01.060 { 00:32:01.060 "params": { 00:32:01.060 "name": "Nvme$subsystem", 00:32:01.060 "trtype": "$TEST_TRANSPORT", 00:32:01.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.060 "adrfam": "ipv4", 00:32:01.060 "trsvcid": "$NVMF_PORT", 00:32:01.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.060 "hdgst": 
${hdgst:-false}, 00:32:01.060 "ddgst": ${ddgst:-false} 00:32:01.060 }, 00:32:01.060 "method": "bdev_nvme_attach_controller" 00:32:01.060 } 00:32:01.060 EOF 00:32:01.060 )") 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
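The ldd, grep, and awk steps traced just above and below are the harness probing whether the fio plugin was linked against a sanitizer runtime; if it was, that runtime has to be preloaded ahead of the plugin, otherwise ASAN aborts at startup. A hedged reconstruction of that logic (variable names ours, behavior as seen in this trace, where both probes come back empty):

  # Probe the plugin for each known ASAN runtime; on this run asan_lib stays
  # empty, so only the plugin itself ends up in LD_PRELOAD.
  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && break
  done
  LD_PRELOAD="$asan_lib $plugin"    # sanitizer runtime, if any, must load first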
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=,
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:01.060 "params": {
00:32:01.060 "name": "Nvme0",
00:32:01.060 "trtype": "tcp",
00:32:01.060 "traddr": "10.0.0.2",
00:32:01.060 "adrfam": "ipv4",
00:32:01.060 "trsvcid": "4420",
00:32:01.060 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:01.060 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:01.060 "hdgst": true,
00:32:01.060 "ddgst": true
00:32:01.060 },
00:32:01.060 "method": "bdev_nvme_attach_controller"
00:32:01.060 }'
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:32:01.060 20:26:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:32:01.319 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:32:01.319 ...
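The difference from the fio_dif_rand_params attach parameters earlier is the pair "hdgst": true and "ddgst": true: with these set, bdev_nvme_attach_controller enables NVMe/TCP header and data digests, a CRC32C over each PDU header and payload, and the 128KiB random reads that follow exercise those digest paths end to end. A quick way to confirm the toggle, with config.json standing in for the single-controller JSON printed just above:

  # Both digest flags should come back true for this test.
  jq '.params | {hdgst, ddgst}' config.json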
00:32:01.319 fio-3.35 00:32:01.319 Starting 3 threads 00:32:01.319 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.523 00:32:13.523 filename0: (groupid=0, jobs=1): err= 0: pid=2189080: Wed Jul 24 20:26:15 2024 00:32:13.523 read: IOPS=158, BW=19.8MiB/s (20.8MB/s)(199MiB/10045msec) 00:32:13.523 slat (nsec): min=5839, max=51640, avg=19788.58, stdev=3057.09 00:32:13.523 clat (usec): min=11118, max=62300, avg=18878.01, stdev=3339.32 00:32:13.523 lat (usec): min=11137, max=62321, avg=18897.80, stdev=3339.38 00:32:13.523 clat percentiles (usec): 00:32:13.523 | 1.00th=[13042], 5.00th=[16188], 10.00th=[16909], 20.00th=[17433], 00:32:13.523 | 30.00th=[17957], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:32:13.523 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20579], 95.00th=[21365], 00:32:13.523 | 99.00th=[22938], 99.50th=[52691], 99.90th=[62129], 99.95th=[62129], 00:32:13.523 | 99.99th=[62129] 00:32:13.523 bw ( KiB/s): min=18688, max=21248, per=34.54%, avg=20339.20, stdev=677.19, samples=20 00:32:13.523 iops : min= 146, max= 166, avg=158.90, stdev= 5.29, samples=20 00:32:13.523 lat (msec) : 20=82.10%, 50=17.40%, 100=0.50% 00:32:13.523 cpu : usr=91.76%, sys=7.65%, ctx=17, majf=0, minf=143 00:32:13.523 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.523 issued rwts: total=1592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.523 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.523 filename0: (groupid=0, jobs=1): err= 0: pid=2189081: Wed Jul 24 20:26:15 2024 00:32:13.523 read: IOPS=147, BW=18.5MiB/s (19.4MB/s)(186MiB/10048msec) 00:32:13.523 slat (nsec): min=6109, max=51369, avg=19783.82, stdev=2427.85 00:32:13.523 clat (usec): min=13071, max=64005, avg=20217.96, stdev=3791.29 00:32:13.523 lat (usec): min=13090, max=64026, avg=20237.75, stdev=3791.30 00:32:13.523 clat percentiles (usec): 00:32:13.523 | 1.00th=[14746], 5.00th=[17957], 10.00th=[18220], 20.00th=[18744], 00:32:13.523 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19792], 60.00th=[20317], 00:32:13.523 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21890], 95.00th=[22414], 00:32:13.523 | 99.00th=[24249], 99.50th=[60556], 99.90th=[63177], 99.95th=[64226], 00:32:13.523 | 99.99th=[64226] 00:32:13.523 bw ( KiB/s): min=17408, max=20224, per=32.26%, avg=18997.00, stdev=780.47, samples=20 00:32:13.523 iops : min= 136, max= 158, avg=148.40, stdev= 6.11, samples=20 00:32:13.523 lat (msec) : 20=52.86%, 50=46.47%, 100=0.67% 00:32:13.523 cpu : usr=91.73%, sys=7.70%, ctx=22, majf=0, minf=140 00:32:13.523 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.523 issued rwts: total=1487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.523 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.523 filename0: (groupid=0, jobs=1): err= 0: pid=2189082: Wed Jul 24 20:26:15 2024 00:32:13.523 read: IOPS=153, BW=19.2MiB/s (20.1MB/s)(193MiB/10047msec) 00:32:13.523 slat (nsec): min=6602, max=54472, avg=19818.09, stdev=2680.44 00:32:13.523 clat (usec): min=10698, max=62603, avg=19466.52, stdev=2819.35 00:32:13.523 lat (usec): min=10717, max=62623, avg=19486.33, stdev=2819.49 00:32:13.523 clat percentiles (usec): 00:32:13.523 | 
1.00th=[13042], 5.00th=[16581], 10.00th=[17433], 20.00th=[18220], 00:32:13.523 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19530], 60.00th=[19792], 00:32:13.523 | 70.00th=[20317], 80.00th=[20579], 90.00th=[21365], 95.00th=[21627], 00:32:13.523 | 99.00th=[22938], 99.50th=[23462], 99.90th=[62129], 99.95th=[62653], 00:32:13.523 | 99.99th=[62653] 00:32:13.523 bw ( KiB/s): min=17920, max=20992, per=33.51%, avg=19737.60, stdev=632.00, samples=20 00:32:13.523 iops : min= 140, max= 164, avg=154.20, stdev= 4.94, samples=20 00:32:13.523 lat (msec) : 20=65.09%, 50=34.65%, 100=0.26% 00:32:13.523 cpu : usr=92.37%, sys=7.06%, ctx=22, majf=0, minf=112 00:32:13.523 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.523 issued rwts: total=1544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.523 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.523 00:32:13.523 Run status group 0 (all jobs): 00:32:13.523 READ: bw=57.5MiB/s (60.3MB/s), 18.5MiB/s-19.8MiB/s (19.4MB/s-20.8MB/s), io=578MiB (606MB), run=10045-10048msec 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.523 00:32:13.523 real 0m11.462s 00:32:13.523 user 0m29.110s 00:32:13.523 sys 0m2.637s 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:13.523 20:26:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:13.523 ************************************ 00:32:13.523 END TEST fio_dif_digest 00:32:13.523 ************************************ 00:32:13.523 20:26:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:13.523 20:26:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:13.523 rmmod nvme_tcp 00:32:13.523 rmmod nvme_fabrics 00:32:13.523 
rmmod nvme_keyring 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2182761 ']' 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2182761 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2182761 ']' 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2182761 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2182761 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2182761' 00:32:13.523 killing process with pid 2182761 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2182761 00:32:13.523 20:26:16 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2182761 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:13.523 20:26:16 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:14.459 Waiting for block devices as requested 00:32:14.459 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:32:14.718 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:14.718 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:14.978 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:14.978 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:14.978 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:15.237 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:15.237 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:15.237 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:15.237 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:15.496 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:15.496 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:15.496 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:15.755 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:15.755 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:15.755 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:15.755 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:16.013 20:26:19 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:16.013 20:26:19 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:16.013 20:26:19 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:16.013 20:26:19 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:16.013 20:26:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.013 20:26:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:16.013 20:26:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.915 20:26:21 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:17.915 00:32:17.915 real 1m11.407s 00:32:17.915 user 6m30.214s 00:32:17.915 sys 0m23.027s 00:32:17.915 20:26:21 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:17.915 20:26:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:32:17.915 ************************************ 00:32:17.915 END TEST nvmf_dif 00:32:17.915 ************************************ 00:32:18.175 20:26:21 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:18.175 20:26:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:18.175 20:26:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:18.175 20:26:21 -- common/autotest_common.sh@10 -- # set +x 00:32:18.175 ************************************ 00:32:18.175 START TEST nvmf_abort_qd_sizes 00:32:18.175 ************************************ 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:18.175 * Looking for test storage... 00:32:18.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.175 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.176 20:26:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:18.176 20:26:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:20.712 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:20.712 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:20.712 Found net devices under 0000:84:00.0: cvl_0_0 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:20.712 Found net devices under 0000:84:00.1: cvl_0_1 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
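Discovery has now matched both E810 ports (device ID 0x159b) and their interfaces, cvl_0_0 and cvl_0_1. The nvmf_tcp_init trace that follows builds the usual phy-mode topology: the target port moves into its own network namespace, so traffic between the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) really crosses the back-to-back link instead of being short-circuited by the local stack. Condensed from the commands below:

  # Same steps as the trace, minus the harness bookkeeping.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # reachability check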
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:20.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:20.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms
00:32:20.712
00:32:20.712 --- 10.0.0.2 ping statistics ---
00:32:20.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:20.712 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:20.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:20.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:32:20.712 00:32:20.712 --- 10.0.0.1 ping statistics --- 00:32:20.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.712 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:20.712 20:26:24 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:22.647 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:22.647 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:22.647 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:22.647 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:22.647 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:22.647 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:22.647 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:22.647 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:22.647 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:22.647 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:22.647 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:22.647 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:22.647 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:22.647 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:22.647 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:22.647 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:23.214 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2194028 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2194028 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2194028 ']' 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
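Condensing the nvmf_tcp_init trace above: one e810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. A sketch of the same plumbing, with the names and addresses from this run (not a drop-in script):
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the default ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                           # initiator -> target sanity check
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
  # The target app is then started inside the namespace, as traced below:
  # ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf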
00:32:23.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:23.472 20:26:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:23.731 [2024-07-24 20:26:27.269507] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 00:32:23.731 [2024-07-24 20:26:27.269678] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.731 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.731 [2024-07-24 20:26:27.423161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:23.989 [2024-07-24 20:26:27.635596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.989 [2024-07-24 20:26:27.635666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.989 [2024-07-24 20:26:27.635686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.989 [2024-07-24 20:26:27.635703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.989 [2024-07-24 20:26:27.635740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.989 [2024-07-24 20:26:27.635986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.989 [2024-07-24 20:26:27.636047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:23.989 [2024-07-24 20:26:27.636108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:23.989 [2024-07-24 20:26:27.636113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:24.555 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:32:24.555 20:26:28 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:24.812 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:24.813 20:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:24.813 ************************************ 00:32:24.813 START TEST spdk_target_abort 00:32:24.813 ************************************ 00:32:24.813 20:26:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:32:24.813 20:26:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:24.813 20:26:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:32:24.813 20:26:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.813 20:26:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:28.094 spdk_targetn1 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:28.094 [2024-07-24 20:26:31.231677] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:28.094 [2024-07-24 20:26:31.268204] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:28.094 20:26:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:28.094 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:31.378 Initializing NVMe Controllers 00:32:31.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:31.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:31.378 Initialization complete. Launching workers. 00:32:31.378 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8224, failed: 0 00:32:31.378 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1277, failed to submit 6947 00:32:31.378 success 799, unsuccess 478, failed 0 00:32:31.378 20:26:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:31.378 20:26:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.378 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.661 Initializing NVMe Controllers 00:32:34.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:34.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:34.661 Initialization complete. Launching workers. 00:32:34.661 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8301, failed: 0 00:32:34.661 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7076 00:32:34.661 success 356, unsuccess 869, failed 0 00:32:34.661 20:26:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:34.661 20:26:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:34.661 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.943 Initializing NVMe Controllers 00:32:37.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:37.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:37.943 Initialization complete. Launching workers. 
00:32:37.943 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27732, failed: 0 00:32:37.943 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2703, failed to submit 25029 00:32:37.943 success 264, unsuccess 2439, failed 0 00:32:37.943 20:26:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:37.943 20:26:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.943 20:26:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.943 20:26:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.943 20:26:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:37.943 20:26:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.943 20:26:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2194028 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2194028 ']' 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2194028 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2194028 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2194028' 00:32:38.877 killing process with pid 2194028 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2194028 00:32:38.877 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2194028 00:32:39.443 00:32:39.443 real 0m14.615s 00:32:39.443 user 0m57.244s 00:32:39.443 sys 0m2.899s 00:32:39.443 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:39.443 20:26:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.443 ************************************ 00:32:39.443 END TEST spdk_target_abort 00:32:39.443 ************************************ 00:32:39.443 20:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:39.443 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:39.443 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:39.443 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:39.443 ************************************ 00:32:39.443 START TEST kernel_target_abort 00:32:39.443 
************************************ 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:39.443 20:26:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:40.828 Waiting for block devices as requested 00:32:41.122 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:32:41.122 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:41.122 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:41.393 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:41.393 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:41.393 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:41.652 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:41.652 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:41.652 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:41.652 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:41.910 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:41.910 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:41.910 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:42.169 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:42.169 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:42.169 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:42.169 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:42.427 No valid GPT data, bailing 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:42.427 20:26:46 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:32:42.427 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420
00:32:42.685
00:32:42.685 Discovery Log Number of Records 2, Generation counter 2
00:32:42.685 =====Discovery Log Entry 0======
00:32:42.685 trtype:  tcp
00:32:42.685 adrfam:  ipv4
00:32:42.685 subtype: current discovery subsystem
00:32:42.685 treq:    not specified, sq flow control disable supported
00:32:42.685 portid:  1
00:32:42.685 trsvcid: 4420
00:32:42.685 subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:32:42.685 traddr:  10.0.0.1
00:32:42.685 eflags:  none
00:32:42.685 sectype: none
00:32:42.685 =====Discovery Log Entry 1======
00:32:42.685 trtype:  tcp
00:32:42.685 adrfam:  ipv4
00:32:42.685 subtype: nvme subsystem
00:32:42.685 treq:    not specified, sq flow control disable supported
00:32:42.685 portid:  1
00:32:42.685 trsvcid: 4420
00:32:42.685 subnqn:  nqn.2016-06.io.spdk:testnqn
00:32:42.685 traddr:  10.0.0.1
00:32:42.685 eflags:  none
00:32:42.685 sectype: none
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:42.685 20:26:46
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:42.685 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:42.686 20:26:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:42.686 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.970 Initializing NVMe Controllers 00:32:45.971 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:45.971 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:45.971 Initialization complete. Launching workers. 00:32:45.971 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 19326, failed: 0 00:32:45.971 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19326, failed to submit 0 00:32:45.971 success 0, unsuccess 19326, failed 0 00:32:45.971 20:26:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:45.971 20:26:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:45.971 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.256 Initializing NVMe Controllers 00:32:49.256 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:49.256 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:49.256 Initialization complete. Launching workers. 
00:32:49.256 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35562, failed: 0 00:32:49.256 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 8978, failed to submit 26584 00:32:49.256 success 0, unsuccess 8978, failed 0 00:32:49.256 20:26:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:49.256 20:26:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:49.256 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.538 Initializing NVMe Controllers 00:32:52.538 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:52.538 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:52.538 Initialization complete. Launching workers. 00:32:52.538 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34254, failed: 0 00:32:52.538 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 8550, failed to submit 25704 00:32:52.538 success 0, unsuccess 8550, failed 0 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:52.538 20:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:53.913 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:53.913 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:53.913 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:53.913 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:53.913 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:53.913 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:53.913 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:53.913 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:53.913 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:53.913 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:53.913 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:53.913 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:53.913 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:53.913 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:32:53.913 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:53.913 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:54.849 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:32:54.849 00:32:54.849 real 0m15.465s 00:32:54.849 user 0m6.063s 00:32:54.849 sys 0m4.139s 00:32:54.849 20:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:54.849 20:26:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:54.849 ************************************ 00:32:54.849 END TEST kernel_target_abort 00:32:54.849 ************************************ 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:54.849 rmmod nvme_tcp 00:32:54.849 rmmod nvme_fabrics 00:32:54.849 rmmod nvme_keyring 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2194028 ']' 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2194028 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2194028 ']' 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2194028 00:32:54.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2194028) - No such process 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2194028 is not found' 00:32:54.849 Process with pid 2194028 is not found 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:54.849 20:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:56.226 Waiting for block devices as requested 00:32:56.226 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:32:56.485 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:56.748 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:56.748 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:56.748 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:57.006 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:57.006 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:57.006 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:57.006 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:57.266 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:57.266 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:57.266 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:57.524 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:57.524 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:57.525 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:57.784 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:32:57.784 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:57.784 20:27:01 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:57.784 20:27:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:57.784 20:27:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:57.784 20:27:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:57.784 20:27:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.784 20:27:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:57.784 20:27:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.343 20:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:00.343 00:33:00.343 real 0m41.811s 00:33:00.343 user 1m6.106s 00:33:00.343 sys 0m11.538s 00:33:00.343 20:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:00.343 20:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:00.343 ************************************ 00:33:00.343 END TEST nvmf_abort_qd_sizes 00:33:00.343 ************************************ 00:33:00.343 20:27:03 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:00.343 20:27:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:00.343 20:27:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:00.343 20:27:03 -- common/autotest_common.sh@10 -- # set +x 00:33:00.343 ************************************ 00:33:00.343 START TEST keyring_file 00:33:00.343 ************************************ 00:33:00.343 20:27:03 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:00.343 * Looking for test storage... 
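Between configure_kernel_target and the clean_kernel_target/nvmftestfini teardown traced above, the whole kernel-target lifecycle reduces to a short configfs session. A sketch with the values from this run; the attribute names follow the standard nvmet configfs layout and are an assumption where the trace only shows the echoed values:
  NQN=nqn.2016-06.io.spdk:testnqn
  SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
  PORT=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet                                    # nvmet-tcp loads when trtype is set
  mkdir -p "$SUBSYS/namespaces/1" "$PORT"
  echo "SPDK-$NQN"  > "$SUBSYS/attr_serial"         # identity string; attr name assumed
  echo 1            > "$SUBSYS/attr_allow_any_host" # attr name assumed
  echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
  echo 1            > "$SUBSYS/namespaces/1/enable"
  echo 10.0.0.1     > "$PORT/addr_traddr"
  echo tcp          > "$PORT/addr_trtype"
  echo 4420         > "$PORT/addr_trsvcid"
  echo ipv4         > "$PORT/addr_adrfam"
  ln -s "$SUBSYS" "$PORT/subsystems/"               # expose the subsystem on the port
  # ... run the workload, then tear down in reverse, as clean_kernel_target does:
  echo 0 > "$SUBSYS/namespaces/1/enable"
  rm -f "$PORT/subsystems/$NQN"
  rmdir "$SUBSYS/namespaces/1" "$PORT" "$SUBSYS"
  modprobe -r nvmet_tcp nvmet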
00:33:00.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:00.343 20:27:03 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:00.343 20:27:03 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.343 20:27:03 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.343 20:27:03 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.343 20:27:03 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.343 20:27:03 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.343 20:27:03 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.343 20:27:03 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.343 20:27:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:00.343 20:27:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:00.343 20:27:03 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rciR14PaiT 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:00.344 20:27:03 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rciR14PaiT 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rciR14PaiT 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rciR14PaiT 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YjcJzt9atg 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:00.344 20:27:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YjcJzt9atg 00:33:00.344 20:27:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YjcJzt9atg 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.YjcJzt9atg 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=2200044 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:00.344 20:27:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2200044 00:33:00.344 20:27:03 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2200044 ']' 00:33:00.344 20:27:03 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.344 20:27:03 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.344 20:27:03 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.344 20:27:03 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.344 20:27:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:00.344 [2024-07-24 20:27:03.974839] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
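prep_key, traced above, turns a raw hex key into an NVMe TLS PSK file for the keyring test. As a sketch (format_interchange_psk is the helper from the sourced nvmf/common.sh; it emits the PSK interchange form, a base64-wrapped key plus checksum under an NVMeTLSkey-1 prefix, via the embedded python shown in the trace):
  key=00112233445566778899aabbccddeeff              # key0 from this run
  path=$(mktemp)                                    # /tmp/tmp.rciR14PaiT in this run
  format_interchange_psk "$key" 0 > "$path"         # digest selector 0, as traced
  chmod 0600 "$path"                                # the test locks the file down
  # later registered over the bdevperf RPC socket:
  # rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"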
00:33:00.344 [2024-07-24 20:27:03.975023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200044 ]
00:33:00.344 EAL: No free 2048 kB hugepages reported on node 1
00:33:00.344 [2024-07-24 20:27:04.108784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:00.604 [2024-07-24 20:27:04.313701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@864 -- # return 0
00:33:01.542 20:27:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:33:01.542 [2024-07-24 20:27:05.220450] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:01.542 null0
00:33:01.542 [2024-07-24 20:27:05.253745] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:33:01.542 [2024-07-24 20:27:05.254660] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:33:01.542 [2024-07-24 20:27:05.261697] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:01.542 20:27:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:33:01.542 [2024-07-24 20:27:05.273723] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:33:01.542 request:
00:33:01.542 {
00:33:01.542   "nqn": "nqn.2016-06.io.spdk:cnode0",
00:33:01.542   "secure_channel": false,
00:33:01.542   "listen_address": {
00:33:01.542     "trtype": "tcp",
00:33:01.542     "traddr": "127.0.0.1",
00:33:01.542     "trsvcid": "4420"
00:33:01.542   },
00:33:01.542   "method": "nvmf_subsystem_add_listener",
00:33:01.542   "req_id": 1
00:33:01.542 }
00:33:01.542 Got JSON-RPC error response
00:33:01.542 response:
00:33:01.542 {
00:33:01.542   "code": -32602,
00:33:01.542   "message": "Invalid parameters"
00:33:01.542 }
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@653 -- # es=1
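The NOT wrapper above asserts the negative path: the target already listens on 127.0.0.1:4420, so a second nvmf_subsystem_add_listener must come back with the -32602 error shown. Reduced to a sketch:
  # Sketch: assert that re-adding an existing listener fails; rpc_cmd
  # forwards to scripts/rpc.py against the spdk_tgt socket, as traced.
  if rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
         nqn.2016-06.io.spdk:cnode0; then
      echo "duplicate listener unexpectedly accepted" >&2
      exit 1
  fi
  # Expected reply: JSON-RPC error {"code": -32602, "message": "Invalid parameters"}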
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:01.542 20:27:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=2200185
00:33:01.542 20:27:05 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:33:01.542 20:27:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2200185 /var/tmp/bperf.sock
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2200185 ']'
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:01.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:01.542 20:27:05 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:33:01.802 [2024-07-24 20:27:05.365667] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:33:01.802 [2024-07-24 20:27:05.365837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200185 ]
00:33:01.802 EAL: No free 2048 kB hugepages reported on node 1
00:33:01.802 [2024-07-24 20:27:05.470654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:02.060 [2024-07-24 20:27:05.617550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:02.060 20:27:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:02.060 20:27:05 keyring_file -- common/autotest_common.sh@864 -- # return 0
00:33:02.061 20:27:05 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rciR14PaiT
00:33:02.061 20:27:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rciR14PaiT
00:33:02.318 20:27:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YjcJzt9atg
00:33:02.318 20:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YjcJzt9atg
00:33:02.885 20:27:06 keyring_file -- keyring/file.sh@51 -- # get_key key0
00:33:02.885 20:27:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path
00:33:02.885 20:27:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:02.885 20:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:02.885 20:27:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:03.144 20:27:06 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.rciR14PaiT == \/\t\m\p\/\t\m\p\.\r\c\i\R\1\4\P\a\i\T ]]
00:33:03.144 20:27:06 keyring_file -- keyring/file.sh@52 -- # get_key key1
00:33:03.144 20:27:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:33:03.144 20:27:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:03.144 20:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:03.144 20:27:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:33:03.402 20:27:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.YjcJzt9atg == \/\t\m\p\/\t\m\p\.\Y\j\c\J\z\t\9\a\t\g ]]
00:33:03.402 20:27:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0
00:33:03.402 20:27:07 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:03.402 20:27:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:03.402 20:27:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:03.402 20:27:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:03.402 20:27:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:03.968 20:27:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 ))
00:33:03.968 20:27:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1
00:33:03.968 20:27:07 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:33:03.968 20:27:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:03.968 20:27:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:03.968 20:27:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:33:03.968 20:27:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:04.226 20:27:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:33:04.226 20:27:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:04.226 20:27:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:04.485 [2024-07-24 20:27:08.094291] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:33:04.485 nvme0n1
00:33:04.485 20:27:08 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0
00:33:04.485 20:27:08 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:04.485 20:27:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:04.485 20:27:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:04.485 20:27:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:04.485 20:27:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:04.744 20:27:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 ))
00:33:04.744 20:27:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1
00:33:04.744 20:27:08 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:33:04.744 20:27:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:04.744 20:27:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:04.744 20:27:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:04.744 20:27:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:33:05.310 20:27:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:33:05.310 20:27:08 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:05.310 Running I/O for 1 seconds...
00:33:06.685
00:33:06.685 Latency(us)
00:33:06.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:06.685 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:33:06.685 nvme0n1 : 1.03 4821.15 18.83 0.00 0.00 26219.75 5170.06 28738.75
00:33:06.685 ===================================================================================================================
00:33:06.685 Total : 4821.15 18.83 0.00 0.00 26219.75 5170.06 28738.75
00:33:06.685 0
00:33:06.685 20:27:10 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:33:06.685 20:27:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:33:06.686 20:27:10 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:33:06.686 20:27:10 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:06.686 20:27:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:06.686 20:27:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:06.686 20:27:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:06.686 20:27:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:06.944 20:27:10 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:33:06.944 20:27:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:33:06.944 20:27:10 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:33:06.944 20:27:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:06.944 20:27:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:06.944 20:27:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:33:06.944 20:27:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:07.510 20:27:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:33:07.510 20:27:11 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:33:07.510 20:27:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:33:07.510 20:27:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:33:07.510 20:27:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:33:07.510 20:27:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:07.510 20:27:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:33:07.510 20:27:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:07.510 20:27:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:33:07.510 20:27:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:33:07.768 [2024-07-24 20:27:11.410100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:33:07.768 [2024-07-24 20:27:11.410541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d147a0 (107): Transport endpoint is not connected
00:33:07.768 [2024-07-24 20:27:11.411532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d147a0 (9): Bad file descriptor
00:33:07.768 [2024-07-24 20:27:11.412530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:33:07.768 [2024-07-24 20:27:11.412557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:33:07.768 [2024-07-24 20:27:11.412577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:33:07.768 request:
00:33:07.768 {
00:33:07.768 "name": "nvme0",
00:33:07.768 "trtype": "tcp",
00:33:07.768 "traddr": "127.0.0.1",
00:33:07.768 "adrfam": "ipv4",
00:33:07.768 "trsvcid": "4420",
00:33:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:07.768 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:07.768 "prchk_reftag": false,
00:33:07.768 "prchk_guard": false,
00:33:07.768 "hdgst": false,
00:33:07.768 "ddgst": false,
00:33:07.768 "psk": "key1",
00:33:07.768 "method": "bdev_nvme_attach_controller",
00:33:07.768 "req_id": 1
00:33:07.768 }
00:33:07.768 Got JSON-RPC error response
00:33:07.768 response:
00:33:07.768 {
00:33:07.768 "code": -5,
00:33:07.768 "message": "Input/output error"
00:33:07.768 }
00:33:07.769 20:27:11 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:33:07.769 20:27:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:07.769 20:27:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:07.769 20:27:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:07.769 20:27:11 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0
00:33:07.769 20:27:11 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:07.769 20:27:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:07.769 20:27:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:07.769 20:27:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:07.769 20:27:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:08.027 20:27:11 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 ))
00:33:08.027 20:27:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1
00:33:08.027 20:27:11 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:33:08.027 20:27:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:08.027 20:27:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:08.027 20:27:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:08.027 20:27:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:33:08.593 20:27:12 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:33:08.593 20:27:12 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0
00:33:08.593 20:27:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:33:09.159 20:27:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1
00:33:09.159 20:27:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:33:09.741 20:27:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys
00:33:09.741 20:27:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:09.741 20:27:13 keyring_file -- keyring/file.sh@77 -- # jq length
00:33:10.004 20:27:13 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 ))
00:33:10.004 20:27:13 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.rciR14PaiT
00:33:10.004 20:27:13 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rciR14PaiT
00:33:10.004 20:27:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:33:10.004 20:27:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rciR14PaiT
00:33:10.004 20:27:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:33:10.004 20:27:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:10.004 20:27:13 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:33:10.004 20:27:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:10.004 20:27:13 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rciR14PaiT
00:33:10.004 20:27:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rciR14PaiT
00:33:10.570 [2024-07-24 20:27:14.098307] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rciR14PaiT': 0100660
00:33:10.570 [2024-07-24 20:27:14.098357] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:33:10.570 request:
00:33:10.570 {
00:33:10.570 "name": "key0",
00:33:10.570 "path": "/tmp/tmp.rciR14PaiT",
00:33:10.570 "method": "keyring_file_add_key",
00:33:10.570 "req_id": 1
00:33:10.570 }
00:33:10.570 Got JSON-RPC error response
00:33:10.570 response:
00:33:10.570 {
00:33:10.570 "code": -1,
00:33:10.570 "message": "Operation not permitted"
00:33:10.570 }
00:33:10.570 20:27:14 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:33:10.570 20:27:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:10.570 20:27:14 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:10.570 20:27:14 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:10.570 20:27:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.rciR14PaiT
00:33:10.570 20:27:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rciR14PaiT
00:33:10.570 20:27:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rciR14PaiT
00:33:11.136 20:27:14 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.rciR14PaiT
00:33:11.136 20:27:14 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0
00:33:11.136 20:27:14 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:11.136 20:27:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:11.136 20:27:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:11.136 20:27:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:11.136 20:27:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:11.395 20:27:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 ))
00:33:11.395 20:27:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:11.395 20:27:15 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:33:11.395 20:27:15 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:11.395 20:27:15 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:33:11.395 20:27:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:11.395 20:27:15 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:33:11.395 20:27:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:11.395 20:27:15 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:11.395 20:27:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:11.962 [2024-07-24 20:27:15.494066] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rciR14PaiT': No such file or directory
00:33:11.962 [2024-07-24 20:27:15.494115] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:33:11.962 [2024-07-24 20:27:15.494153] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:33:11.962 [2024-07-24 20:27:15.494169] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:33:11.962 [2024-07-24 20:27:15.494185] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:33:11.962 request:
00:33:11.962 {
00:33:11.962 "name": "nvme0",
00:33:11.962 "trtype": "tcp",
00:33:11.962 "traddr": "127.0.0.1",
00:33:11.962 "adrfam": "ipv4",
00:33:11.962 "trsvcid": "4420",
00:33:11.962 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:11.962 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:11.962 "prchk_reftag": false,
00:33:11.962 "prchk_guard": false,
00:33:11.962 "hdgst": false,
00:33:11.962 "ddgst": false,
00:33:11.962 "psk": "key0",
00:33:11.962 "method": "bdev_nvme_attach_controller",
00:33:11.962 "req_id": 1
00:33:11.962 }
00:33:11.962 Got JSON-RPC error response
00:33:11.962 response:
00:33:11.962 {
00:33:11.962 "code": -19,
00:33:11.962 "message": "No such device"
00:33:11.962 }
00:33:11.962 20:27:15 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:33:11.962 20:27:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:11.962 20:27:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:11.962 20:27:15 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:11.962 20:27:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0
00:33:11.962 20:27:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:33:12.220 20:27:15 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@17 -- # name=key0
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@17 -- # digest=0
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@18 -- # mktemp
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NEwNzjVjqL
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:33:12.220 20:27:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:33:12.220 20:27:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest
00:33:12.220 20:27:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:33:12.220 20:27:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:33:12.220 20:27:15 keyring_file -- nvmf/common.sh@704 -- # digest=0
00:33:12.220 20:27:15 keyring_file -- nvmf/common.sh@705 -- # python -
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NEwNzjVjqL
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NEwNzjVjqL
00:33:12.220 20:27:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.NEwNzjVjqL
00:33:12.220 20:27:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NEwNzjVjqL
00:33:12.220 20:27:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NEwNzjVjqL
00:33:12.479 20:27:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:12.479 20:27:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:13.460 nvme0n1
20:27:16 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0
00:33:13.460 20:27:16 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:13.460 20:27:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:13.460 20:27:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:13.460 20:27:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:13.460 20:27:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:13.726 20:27:17 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 ))
00:33:13.726 20:27:17 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0
00:33:13.726 20:27:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:33:13.983 20:27:17 keyring_file -- keyring/file.sh@101 -- # get_key key0
00:33:13.983 20:27:17 keyring_file -- keyring/file.sh@101 -- # jq -r .removed
00:33:13.983 20:27:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:13.983 20:27:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:13.983 20:27:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:14.549 20:27:18 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]]
00:33:14.549 20:27:18 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0
00:33:14.549 20:27:18 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:14.549 20:27:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:14.549 20:27:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:14.549 20:27:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:14.549 20:27:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:14.807 20:27:18 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 ))
00:33:14.807 20:27:18 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:33:14.807 20:27:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:33:15.065 20:27:18 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys
00:33:15.065 20:27:18 keyring_file -- keyring/file.sh@104 -- # jq length
00:33:15.065 20:27:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:15.631 20:27:19 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 ))
00:33:15.631 20:27:19 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NEwNzjVjqL
00:33:15.631 20:27:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NEwNzjVjqL
00:33:16.198 20:27:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YjcJzt9atg
00:33:16.198 20:27:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YjcJzt9atg
00:33:16.456 20:27:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:16.456 20:27:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:33:16.715 nvme0n1
00:33:16.715 20:27:20 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config
00:33:16.715 20:27:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:33:17.281 20:27:20 keyring_file -- keyring/file.sh@112 -- # config='{
00:33:17.281 "subsystems": [
00:33:17.281 {
00:33:17.281 "subsystem": "keyring",
00:33:17.281 "config": [
00:33:17.281 {
00:33:17.281 "method": "keyring_file_add_key",
00:33:17.281 "params": {
00:33:17.281 "name": "key0",
00:33:17.281 "path": "/tmp/tmp.NEwNzjVjqL"
00:33:17.281 }
00:33:17.281 },
00:33:17.281 {
00:33:17.281 "method": "keyring_file_add_key",
00:33:17.281 "params": {
00:33:17.281 "name": "key1",
00:33:17.281 "path": "/tmp/tmp.YjcJzt9atg"
00:33:17.281 }
00:33:17.281 }
00:33:17.281 ]
00:33:17.281 },
00:33:17.281 {
00:33:17.281 "subsystem": "iobuf",
00:33:17.281 "config": [
00:33:17.281 {
00:33:17.281 "method": "iobuf_set_options",
00:33:17.281 "params": {
00:33:17.281 "small_pool_count": 8192,
00:33:17.281 "large_pool_count": 1024,
00:33:17.281 "small_bufsize": 8192,
00:33:17.281 "large_bufsize": 135168
00:33:17.281 }
00:33:17.281 }
00:33:17.281 ]
00:33:17.281 },
00:33:17.281 {
00:33:17.281 "subsystem": "sock",
00:33:17.281 "config": [
00:33:17.281 {
00:33:17.281 "method": "sock_set_default_impl",
00:33:17.281 "params": {
00:33:17.281 "impl_name": "posix"
00:33:17.281 }
00:33:17.281 },
00:33:17.281 {
00:33:17.282 "method": "sock_impl_set_options",
00:33:17.282 "params": {
00:33:17.282 "impl_name": "ssl",
00:33:17.282 "recv_buf_size": 4096,
00:33:17.282 "send_buf_size": 4096,
00:33:17.282 "enable_recv_pipe": true,
00:33:17.282 "enable_quickack": false,
00:33:17.282 "enable_placement_id": 0,
00:33:17.282 "enable_zerocopy_send_server": true,
00:33:17.282 "enable_zerocopy_send_client": false,
00:33:17.282 "zerocopy_threshold": 0,
00:33:17.282 "tls_version": 0,
00:33:17.282 "enable_ktls": false
00:33:17.282 }
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "method": "sock_impl_set_options",
00:33:17.282 "params": {
00:33:17.282 "impl_name": "posix",
00:33:17.282 "recv_buf_size": 2097152,
00:33:17.282 "send_buf_size": 2097152,
00:33:17.282 "enable_recv_pipe": true,
00:33:17.282 "enable_quickack": false,
00:33:17.282 "enable_placement_id": 0,
00:33:17.282 "enable_zerocopy_send_server": true,
00:33:17.282 "enable_zerocopy_send_client": false,
00:33:17.282 "zerocopy_threshold": 0,
00:33:17.282 "tls_version": 0,
00:33:17.282 "enable_ktls": false
00:33:17.282 }
00:33:17.282 }
00:33:17.282 ]
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "subsystem": "vmd",
00:33:17.282 "config": []
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "subsystem": "accel",
00:33:17.282 "config": [
00:33:17.282 {
00:33:17.282 "method": "accel_set_options",
00:33:17.282 "params": {
00:33:17.282 "small_cache_size": 128,
00:33:17.282 "large_cache_size": 16,
00:33:17.282 "task_count": 2048,
00:33:17.282 "sequence_count": 2048,
00:33:17.282 "buf_count": 2048
00:33:17.282 }
00:33:17.282 }
00:33:17.282 ]
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "subsystem": "bdev",
00:33:17.282 "config": [
00:33:17.282 {
00:33:17.282 "method": "bdev_set_options",
00:33:17.282 "params": {
00:33:17.282 "bdev_io_pool_size": 65535,
00:33:17.282 "bdev_io_cache_size": 256,
00:33:17.282 "bdev_auto_examine": true,
00:33:17.282 "iobuf_small_cache_size": 128,
00:33:17.282 "iobuf_large_cache_size": 16
00:33:17.282 }
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "method": "bdev_raid_set_options",
00:33:17.282 "params": {
00:33:17.282 "process_window_size_kb": 1024,
00:33:17.282 "process_max_bandwidth_mb_sec": 0
00:33:17.282 }
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "method": "bdev_iscsi_set_options",
00:33:17.282 "params": {
00:33:17.282 "timeout_sec": 30
00:33:17.282 }
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "method": "bdev_nvme_set_options",
00:33:17.282 "params": {
00:33:17.282 "action_on_timeout": "none",
00:33:17.282 "timeout_us": 0,
00:33:17.282 "timeout_admin_us": 0,
00:33:17.282 "keep_alive_timeout_ms": 10000,
00:33:17.282 "arbitration_burst": 0,
00:33:17.282 "low_priority_weight": 0,
00:33:17.282 "medium_priority_weight": 0,
00:33:17.282 "high_priority_weight": 0,
00:33:17.282 "nvme_adminq_poll_period_us": 10000,
00:33:17.282 "nvme_ioq_poll_period_us": 0,
00:33:17.282 "io_queue_requests": 512,
00:33:17.282 "delay_cmd_submit": true,
00:33:17.282 "transport_retry_count": 4,
00:33:17.282 "bdev_retry_count": 3,
00:33:17.282 "transport_ack_timeout": 0,
00:33:17.282 "ctrlr_loss_timeout_sec": 0,
00:33:17.282 "reconnect_delay_sec": 0,
00:33:17.282 "fast_io_fail_timeout_sec": 0,
00:33:17.282 "disable_auto_failback": false,
00:33:17.282 "generate_uuids": false,
00:33:17.282 "transport_tos": 0,
00:33:17.282 "nvme_error_stat": false,
00:33:17.282 "rdma_srq_size": 0,
00:33:17.282 "io_path_stat": false,
00:33:17.282 "allow_accel_sequence": false,
00:33:17.282 "rdma_max_cq_size": 0,
00:33:17.282 "rdma_cm_event_timeout_ms": 0,
00:33:17.282 "dhchap_digests": [
00:33:17.282 "sha256",
00:33:17.282 "sha384",
00:33:17.282 "sha512"
00:33:17.282 ],
00:33:17.282 "dhchap_dhgroups": [
00:33:17.282 "null",
00:33:17.282 "ffdhe2048",
00:33:17.282 "ffdhe3072",
00:33:17.282 "ffdhe4096",
00:33:17.282 "ffdhe6144",
00:33:17.282 "ffdhe8192"
00:33:17.282 ]
00:33:17.282 }
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "method": "bdev_nvme_attach_controller",
00:33:17.282 "params": {
00:33:17.282 "name": "nvme0",
00:33:17.282 "trtype": "TCP",
00:33:17.282 "adrfam": "IPv4",
00:33:17.282 "traddr": "127.0.0.1",
00:33:17.282 "trsvcid": "4420",
00:33:17.282 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:17.282 "prchk_reftag": false,
00:33:17.282 "prchk_guard": false,
00:33:17.282 "ctrlr_loss_timeout_sec": 0,
00:33:17.282 "reconnect_delay_sec": 0,
00:33:17.282 "fast_io_fail_timeout_sec": 0,
00:33:17.282 "psk": "key0",
00:33:17.282 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:17.282 "hdgst": false,
00:33:17.282 "ddgst": false
00:33:17.282 }
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "method": "bdev_nvme_set_hotplug",
00:33:17.282 "params": {
00:33:17.282 "period_us": 100000,
00:33:17.282 "enable": false
00:33:17.282 }
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "method": "bdev_wait_for_examine"
00:33:17.282 }
00:33:17.282 ]
00:33:17.282 },
00:33:17.282 {
00:33:17.282 "subsystem": "nbd",
00:33:17.282 "config": []
00:33:17.282 }
00:33:17.282 ]
00:33:17.282 }'
00:33:17.282 20:27:20 keyring_file -- keyring/file.sh@114 -- # killprocess 2200185
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2200185 ']'
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2200185
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@955 -- # uname
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2200185
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2200185'
00:33:17.282 killing process with pid 2200185
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@969 -- # kill 2200185
00:33:17.282 Received shutdown signal, test time was about 1.000000 seconds
00:33:17.282
00:33:17.282 Latency(us)
00:33:17.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:17.282 ===================================================================================================================
00:33:17.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:17.282 20:27:20 keyring_file -- common/autotest_common.sh@974 -- # wait 2200185
00:33:17.541 20:27:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=2202685
00:33:17.541 20:27:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2202685 /var/tmp/bperf.sock
00:33:17.541 20:27:21 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2202685 ']'
00:33:17.541 20:27:21 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:33:17.541 20:27:21 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:17.541 20:27:21 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:17.541 20:27:21 keyring_file -- keyring/file.sh@115 -- # echo '{
00:33:17.541 "subsystems": [
00:33:17.541 {
00:33:17.541 "subsystem": "keyring",
00:33:17.541 "config": [
00:33:17.541 {
00:33:17.541 "method": "keyring_file_add_key",
00:33:17.541 "params": {
00:33:17.541 "name": "key0",
00:33:17.541 "path": "/tmp/tmp.NEwNzjVjqL"
00:33:17.541 }
00:33:17.541 },
00:33:17.541 {
00:33:17.541 "method": "keyring_file_add_key",
00:33:17.541 "params": {
00:33:17.541 "name": "key1",
00:33:17.541 "path": "/tmp/tmp.YjcJzt9atg"
00:33:17.541 }
00:33:17.541 }
00:33:17.541 ]
00:33:17.541 },
00:33:17.541 {
00:33:17.541 "subsystem": "iobuf",
00:33:17.541 "config": [
00:33:17.541 {
00:33:17.541 "method": "iobuf_set_options",
00:33:17.541 "params": {
00:33:17.541 "small_pool_count": 8192,
00:33:17.541 "large_pool_count": 1024,
00:33:17.541 "small_bufsize": 8192,
00:33:17.541 "large_bufsize": 135168
00:33:17.541 }
00:33:17.541 }
00:33:17.541 ]
00:33:17.541 },
00:33:17.541 {
00:33:17.541 "subsystem": "sock",
00:33:17.541 "config": [
00:33:17.541 {
00:33:17.541 "method": "sock_set_default_impl",
00:33:17.541 "params": {
00:33:17.541 "impl_name": "posix"
00:33:17.541 }
00:33:17.541 },
00:33:17.542 {
00:33:17.542 "method": "sock_impl_set_options",
00:33:17.542 "params": {
00:33:17.542 "impl_name": "ssl",
00:33:17.542 "recv_buf_size": 4096,
00:33:17.542 "send_buf_size": 4096,
00:33:17.542 "enable_recv_pipe": true,
00:33:17.542 "enable_quickack": false,
00:33:17.542 "enable_placement_id": 0,
00:33:17.542 "enable_zerocopy_send_server": true,
00:33:17.542 "enable_zerocopy_send_client": false,
00:33:17.542 "zerocopy_threshold": 0,
00:33:17.542 "tls_version": 0,
00:33:17.542 "enable_ktls": false
00:33:17.542 }
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "method": "sock_impl_set_options",
00:33:17.542 "params": {
00:33:17.542 "impl_name": "posix",
00:33:17.542 "recv_buf_size": 2097152,
00:33:17.542 "send_buf_size": 2097152,
00:33:17.542 "enable_recv_pipe": true,
00:33:17.542 "enable_quickack": false,
00:33:17.542 "enable_placement_id": 0,
00:33:17.542 "enable_zerocopy_send_server": true,
00:33:17.542 "enable_zerocopy_send_client": false,
00:33:17.542 "zerocopy_threshold": 0,
00:33:17.542 "tls_version": 0,
00:33:17.542 "enable_ktls": false
00:33:17.542 }
00:33:17.542 }
00:33:17.542 ]
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "subsystem": "vmd",
00:33:17.542 "config": []
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "subsystem": "accel",
00:33:17.542 "config": [
00:33:17.542 {
00:33:17.542 "method": "accel_set_options",
00:33:17.542 "params": {
00:33:17.542 "small_cache_size": 128,
00:33:17.542 "large_cache_size": 16,
00:33:17.542 "task_count": 2048,
00:33:17.542 "sequence_count": 2048,
00:33:17.542 "buf_count": 2048
00:33:17.542 }
00:33:17.542 }
00:33:17.542 ]
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "subsystem": "bdev",
00:33:17.542 "config": [
00:33:17.542 {
00:33:17.542 "method": "bdev_set_options",
00:33:17.542 "params": {
00:33:17.542 "bdev_io_pool_size": 65535,
00:33:17.542 "bdev_io_cache_size": 256,
00:33:17.542 "bdev_auto_examine": true,
00:33:17.542 "iobuf_small_cache_size": 128,
00:33:17.542 "iobuf_large_cache_size": 16
00:33:17.542 }
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "method": "bdev_raid_set_options",
00:33:17.542 "params": {
00:33:17.542 "process_window_size_kb": 1024,
00:33:17.542 "process_max_bandwidth_mb_sec": 0
00:33:17.542 }
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "method": "bdev_iscsi_set_options",
00:33:17.542 "params": {
00:33:17.542 "timeout_sec": 30
00:33:17.542 }
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "method": "bdev_nvme_set_options",
00:33:17.542 "params": {
00:33:17.542 "action_on_timeout": "none",
00:33:17.542 "timeout_us": 0,
00:33:17.542 "timeout_admin_us": 0,
00:33:17.542 "keep_alive_timeout_ms": 10000,
00:33:17.542 "arbitration_burst": 0,
00:33:17.542 "low_priority_weight": 0,
00:33:17.542 "medium_priority_weight": 0,
00:33:17.542 "high_priority_weight": 0,
00:33:17.542 "nvme_adminq_poll_period_us": 10000,
00:33:17.542 "nvme_ioq_poll_period_us": 0,
00:33:17.542 "io_queue_requests": 512,
00:33:17.542 "delay_cmd_submit": true,
00:33:17.542 "transport_retry_count": 4,
00:33:17.542 "bdev_retry_count": 3,
00:33:17.542 "transport_ack_timeout": 0,
00:33:17.542 "ctrlr_loss_timeout_sec": 0,
00:33:17.542 "reconnect_delay_sec": 0,
00:33:17.542 "fast_io_fail_timeout_sec": 0,
00:33:17.542 "disable_auto_failback": false,
00:33:17.542 "generate_uuids": false,
00:33:17.542 "transport_tos": 0,
00:33:17.542 "nvme_error_stat": false,
00:33:17.542 "rdma_srq_size": 0,
00:33:17.542 "io_path_stat": false,
00:33:17.542 20:27:21 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:17.542 "allow_accel_sequence": false,
00:33:17.542 "rdma_max_cq_size": 0,
00:33:17.542 "rdma_cm_event_timeout_ms": 0,
00:33:17.542 "dhchap_digests": [
00:33:17.542 "sha256",
00:33:17.542 "sha384",
00:33:17.542 "sha512"
00:33:17.542 ],
00:33:17.542 "dhchap_dhgroups": [
00:33:17.542 "null",
00:33:17.542 "ffdhe2048",
00:33:17.542 "ffdhe3072",
00:33:17.542 "ffdhe4096",
00:33:17.542 "ffdhe6144",
00:33:17.542 "ffdhe8192"
00:33:17.542 ]
00:33:17.542 }
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "method": "bdev_nvme_attach_controller",
00:33:17.542 "params": {
00:33:17.542 "name": "nvme0",
00:33:17.542 "trtype": "TCP",
00:33:17.542 "adrfam": "IPv4",
00:33:17.542 "traddr": "127.0.0.1",
00:33:17.542 "trsvcid": "4420",
00:33:17.542 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:17.542 "prchk_reftag": false,
00:33:17.542 "prchk_guard": false,
00:33:17.542 "ctrlr_loss_timeout_sec": 0,
00:33:17.542 "reconnect_delay_sec": 0,
00:33:17.542 "fast_io_fail_timeout_sec": 0,
00:33:17.542 "psk": "key0",
00:33:17.542 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:17.542 "hdgst": false,
00:33:17.542 "ddgst": false
00:33:17.542 }
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "method": "bdev_nvme_set_hotplug",
00:33:17.542 "params": {
00:33:17.542 "period_us": 100000,
00:33:17.542 "enable": false
00:33:17.542 }
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "method": "bdev_wait_for_examine"
00:33:17.542 }
00:33:17.542 ]
00:33:17.542 },
00:33:17.542 {
00:33:17.542 "subsystem": "nbd",
00:33:17.542 "config": []
00:33:17.542 }
00:33:17.542 ]
00:33:17.542 }'
00:33:17.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:17.542 20:27:21 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:17.542 20:27:21 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:33:17.542 [2024-07-24 20:27:21.189148] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization...
00:33:17.542 [2024-07-24 20:27:21.189315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2202685 ] 00:33:17.542 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.542 [2024-07-24 20:27:21.296696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.801 [2024-07-24 20:27:21.436524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.059 [2024-07-24 20:27:21.642340] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:18.059 20:27:21 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.059 20:27:21 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:18.059 20:27:21 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:18.059 20:27:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.060 20:27:21 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:18.628 20:27:22 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:18.628 20:27:22 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:18.628 20:27:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:18.628 20:27:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.628 20:27:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.628 20:27:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.628 20:27:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:18.886 20:27:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:18.886 20:27:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:18.886 20:27:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:18.886 20:27:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.886 20:27:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.886 20:27:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.886 20:27:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:19.143 20:27:22 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:19.143 20:27:22 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:19.143 20:27:22 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:19.143 20:27:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:19.401 20:27:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:19.401 20:27:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:19.401 20:27:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NEwNzjVjqL /tmp/tmp.YjcJzt9atg 00:33:19.401 20:27:23 keyring_file -- keyring/file.sh@20 -- # killprocess 2202685 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2202685 ']' 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2202685 00:33:19.401 20:27:23 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2202685 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2202685' 00:33:19.401 killing process with pid 2202685 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@969 -- # kill 2202685 00:33:19.401 Received shutdown signal, test time was about 1.000000 seconds 00:33:19.401 00:33:19.401 Latency(us) 00:33:19.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.401 =================================================================================================================== 00:33:19.401 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:19.401 20:27:23 keyring_file -- common/autotest_common.sh@974 -- # wait 2202685 00:33:19.659 20:27:23 keyring_file -- keyring/file.sh@21 -- # killprocess 2200044 00:33:19.659 20:27:23 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2200044 ']' 00:33:19.659 20:27:23 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2200044 00:33:19.659 20:27:23 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:19.659 20:27:23 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:19.659 20:27:23 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2200044 00:33:19.917 20:27:23 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:19.917 20:27:23 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:19.917 20:27:23 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2200044' 00:33:19.917 killing process with pid 2200044 00:33:19.917 20:27:23 keyring_file -- common/autotest_common.sh@969 -- # kill 2200044 00:33:19.917 [2024-07-24 20:27:23.483355] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:19.917 20:27:23 keyring_file -- common/autotest_common.sh@974 -- # wait 2200044 00:33:20.485 00:33:20.485 real 0m20.510s 00:33:20.485 user 0m51.811s 00:33:20.485 sys 0m4.410s 00:33:20.485 20:27:24 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:20.485 20:27:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:20.485 ************************************ 00:33:20.485 END TEST keyring_file 00:33:20.485 ************************************ 00:33:20.485 20:27:24 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:33:20.485 20:27:24 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:20.485 20:27:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:20.485 20:27:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:20.485 20:27:24 -- common/autotest_common.sh@10 -- # set +x 00:33:20.485 ************************************ 00:33:20.485 START TEST keyring_linux 00:33:20.485 ************************************ 00:33:20.485 20:27:24 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:20.485 * Looking for test 
storage... 00:33:20.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:20.485 20:27:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:20.485 20:27:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.485 20:27:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.745 20:27:24 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.745 20:27:24 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.745 20:27:24 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.745 20:27:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.745 20:27:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.745 20:27:24 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.745 20:27:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:20.745 20:27:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:20.745 20:27:24 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:20.745 /tmp/:spdk-test:key0 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:20.745 20:27:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:20.745 20:27:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:20.745 /tmp/:spdk-test:key1 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2203050 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:20.745 20:27:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2203050 00:33:20.745 20:27:24 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2203050 ']' 00:33:20.745 20:27:24 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.745 20:27:24 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:20.745 20:27:24 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.745 20:27:24 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:20.745 20:27:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:20.745 [2024-07-24 20:27:24.427214] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
00:33:20.745 [2024-07-24 20:27:24.427304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203050 ] 00:33:20.745 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.745 [2024-07-24 20:27:24.525749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.003 [2024-07-24 20:27:24.723472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:21.569 20:27:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:21.569 [2024-07-24 20:27:25.138670] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.569 null0 00:33:21.569 [2024-07-24 20:27:25.171499] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:21.569 [2024-07-24 20:27:25.172300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.569 20:27:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:21.569 172771516 00:33:21.569 20:27:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:21.569 838777213 00:33:21.569 20:27:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2203183 00:33:21.569 20:27:25 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:21.569 20:27:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2203183 /var/tmp/bperf.sock 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2203183 ']' 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:21.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:21.569 20:27:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:21.569 [2024-07-24 20:27:25.251229] Starting SPDK v24.09-pre git sha1 da8d49b2f / DPDK 24.03.0 initialization... 
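The two keyctl add calls above stage the formatted PSKs in the session keyring (@s), and the serials they return, 172771516 and 838777213, are what every later lookup keys off. A condensed sketch of the round-trip the rest of this test exercises, using only stock keyutils commands (the wrappers themselves live in keyring/linux.sh and keyring/common.sh):

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
keyctl print "$sn"                      # payload must equal the interchange-format PSK
keyctl unlink "$sn"                     # detaches it; the trace below reports "1 links removed"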
00:33:21.569 [2024-07-24 20:27:25.251324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203183 ] 00:33:21.569 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.569 [2024-07-24 20:27:25.332619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.827 [2024-07-24 20:27:25.471921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.827 20:27:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:21.827 20:27:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:21.827 20:27:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:21.828 20:27:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:22.393 20:27:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:22.393 20:27:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:22.652 20:27:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:22.652 20:27:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:23.217 [2024-07-24 20:27:26.703350] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:23.217 nvme0n1 00:33:23.217 20:27:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:23.217 20:27:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:23.217 20:27:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:23.217 20:27:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:23.217 20:27:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:23.217 20:27:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.475 20:27:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:23.475 20:27:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:23.475 20:27:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:23.475 20:27:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:23.475 20:27:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.475 20:27:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:23.475 20:27:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.732 20:27:27 keyring_linux -- keyring/linux.sh@25 -- # sn=172771516 00:33:23.732 20:27:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:23.732 20:27:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:23.732 20:27:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 172771516 == \1\7\2\7\7\1\5\1\6 ]] 00:33:23.732 20:27:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 172771516 00:33:23.733 20:27:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:23.733 20:27:27 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.990 Running I/O for 1 seconds... 00:33:24.925 00:33:24.925 Latency(us) 00:33:24.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.925 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:24.925 nvme0n1 : 1.02 5156.49 20.14 0.00 0.00 24621.67 7961.41 34175.81 00:33:24.925 =================================================================================================================== 00:33:24.925 Total : 5156.49 20.14 0.00 0.00 24621.67 7961.41 34175.81 00:33:24.925 0 00:33:24.925 20:27:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:24.925 20:27:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:25.183 20:27:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:25.183 20:27:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:25.183 20:27:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:25.183 20:27:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:25.183 20:27:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:25.183 20:27:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:25.747 20:27:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:25.747 20:27:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:25.747 20:27:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:25.747 20:27:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:25.747 20:27:29 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:33:25.747 20:27:29 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:25.747 20:27:29 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:25.747 20:27:29 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.747 20:27:29 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:25.747 20:27:29 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.747 20:27:29 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:25.747 20:27:29 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:26.004 [2024-07-24 20:27:29.757021] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:26.004 [2024-07-24 20:27:29.757829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074fe0 (107): Transport endpoint is not connected 00:33:26.004 [2024-07-24 20:27:29.758818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074fe0 (9): Bad file descriptor 00:33:26.004 [2024-07-24 20:27:29.759816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:26.004 [2024-07-24 20:27:29.759845] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:26.004 [2024-07-24 20:27:29.759877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:26.004 request: 00:33:26.004 { 00:33:26.004 "name": "nvme0", 00:33:26.004 "trtype": "tcp", 00:33:26.004 "traddr": "127.0.0.1", 00:33:26.004 "adrfam": "ipv4", 00:33:26.004 "trsvcid": "4420", 00:33:26.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:26.004 "prchk_reftag": false, 00:33:26.004 "prchk_guard": false, 00:33:26.004 "hdgst": false, 00:33:26.004 "ddgst": false, 00:33:26.004 "psk": ":spdk-test:key1", 00:33:26.004 "method": "bdev_nvme_attach_controller", 00:33:26.004 "req_id": 1 00:33:26.004 } 00:33:26.004 Got JSON-RPC error response 00:33:26.004 response: 00:33:26.004 { 00:33:26.004 "code": -5, 00:33:26.004 "message": "Input/output error" 00:33:26.004 } 00:33:26.004 20:27:29 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:33:26.004 20:27:29 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:26.004 20:27:29 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:26.004 20:27:29 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:26.004 20:27:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:26.004 20:27:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:26.004 20:27:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:26.004 20:27:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:26.004 20:27:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:26.004 20:27:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:26.004 20:27:29 keyring_linux -- keyring/linux.sh@33 -- # sn=172771516 00:33:26.004 20:27:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 172771516 00:33:26.262 1 links removed 00:33:26.262 20:27:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:26.262 20:27:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:26.262 20:27:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:26.262 20:27:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:26.262 20:27:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:26.262 20:27:29 keyring_linux -- keyring/linux.sh@33 -- # sn=838777213 00:33:26.262 
20:27:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 838777213 00:33:26.262 1 links removed 00:33:26.262 20:27:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2203183 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2203183 ']' 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2203183 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2203183 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2203183' 00:33:26.262 killing process with pid 2203183 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@969 -- # kill 2203183 00:33:26.262 Received shutdown signal, test time was about 1.000000 seconds 00:33:26.262 00:33:26.262 Latency(us) 00:33:26.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.262 =================================================================================================================== 00:33:26.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.262 20:27:29 keyring_linux -- common/autotest_common.sh@974 -- # wait 2203183 00:33:26.530 20:27:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2203050 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2203050 ']' 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2203050 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2203050 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2203050' 00:33:26.530 killing process with pid 2203050 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@969 -- # kill 2203050 00:33:26.530 20:27:30 keyring_linux -- common/autotest_common.sh@974 -- # wait 2203050 00:33:27.106 00:33:27.106 real 0m6.628s 00:33:27.106 user 0m13.067s 00:33:27.106 sys 0m1.972s 00:33:27.106 20:27:30 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:27.106 20:27:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:27.106 ************************************ 00:33:27.106 END TEST keyring_linux 00:33:27.106 ************************************ 00:33:27.106 20:27:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- 
spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:33:27.106 20:27:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:27.106 20:27:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:27.106 20:27:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:27.106 20:27:30 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:33:27.106 20:27:30 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:33:27.106 20:27:30 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:33:27.106 20:27:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:27.106 20:27:30 -- common/autotest_common.sh@10 -- # set +x 00:33:27.106 20:27:30 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:33:27.106 20:27:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:27.106 20:27:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:27.106 20:27:30 -- common/autotest_common.sh@10 -- # set +x 00:33:29.638 INFO: APP EXITING 00:33:29.638 INFO: killing all VMs 00:33:29.638 INFO: killing vhost app 00:33:29.638 INFO: EXIT DONE 00:33:31.540 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:33:31.540 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:31.540 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:31.540 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:31.540 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:31.540 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:31.540 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:31.540 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:31.540 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:31.540 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:31.540 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:31.540 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:31.540 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:31.540 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:31.540 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:31.540 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:31.540 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:33.443 Cleaning 00:33:33.443 Removing: /var/run/dpdk/spdk0/config 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:33.443 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:33.443 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:33.443 Removing: /var/run/dpdk/spdk1/config 00:33:33.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:33.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:33.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:33.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:33.443 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:33.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:33.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:33.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:33.443 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:33.443 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:33.443 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:33.443 Removing: /var/run/dpdk/spdk2/config 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:33.443 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:33.443 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:33.443 Removing: /var/run/dpdk/spdk3/config 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:33.443 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:33.443 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:33.443 Removing: /var/run/dpdk/spdk4/config 00:33:33.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:33.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:33.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:33.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:33.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:33.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:33.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:33.444 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:33.444 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:33.444 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:33.444 Removing: /dev/shm/bdev_svc_trace.1 00:33:33.444 Removing: /dev/shm/nvmf_trace.0 00:33:33.444 Removing: /dev/shm/spdk_tgt_trace.pid1918788 00:33:33.444 Removing: /var/run/dpdk/spdk0 00:33:33.444 Removing: /var/run/dpdk/spdk1 00:33:33.444 Removing: /var/run/dpdk/spdk2 00:33:33.444 Removing: /var/run/dpdk/spdk3 00:33:33.444 Removing: /var/run/dpdk/spdk4 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1916979 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1917851 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1918788 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1919358 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1920051 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1920322 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1921045 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1921054 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1921421 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1922822 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1923799 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1924252 
00:33:33.444 Removing: /var/run/dpdk/spdk_pid1924557 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1924890 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1925206 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1925616 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1926025 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1926336 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1926649 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1929547 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1929840 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1930139 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1930275 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1930843 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1930865 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1931535 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1931552 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1931846 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1931978 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1932142 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1932277 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1932781 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1933060 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1933259 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1935606 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1938526 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1945880 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1946297 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1948827 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1949107 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1952019 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1956133 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1959459 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1966564 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1972055 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1973282 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1974036 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1985125 00:33:33.444 Removing: /var/run/dpdk/spdk_pid1987525 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2015116 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2018506 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2022774 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2027020 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2027038 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2027782 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2028828 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2029483 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2029884 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2030006 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2030148 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2030286 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2030413 00:33:33.444 Removing: /var/run/dpdk/spdk_pid2030949 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2031601 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2032162 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2032551 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2032666 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2032800 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2034205 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2034960 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2040504 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2074132 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2077836 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2078917 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2080303 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2080365 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2080496 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2080755 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2081328 
00:33:33.703 Removing: /var/run/dpdk/spdk_pid2082649 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2083510 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2084061 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2085804 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2086255 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2086805 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2089446 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2095631 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2098274 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2102204 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2103335 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2104968 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2107689 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2110072 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2114639 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2114698 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2117765 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2117998 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2118136 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2118400 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2118411 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2121420 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2121773 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2124781 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2126691 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2130390 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2134125 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2142261 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2146747 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2146755 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2161801 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2162342 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2162878 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2163293 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2164002 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2164535 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2165065 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2165604 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2168249 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2168507 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2172926 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2173106 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2174715 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2179888 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2179893 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2182964 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2184355 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2185750 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2186611 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2188016 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2188895 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2194457 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2194842 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2195230 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2196802 00:33:33.703 Removing: /var/run/dpdk/spdk_pid2197198 00:33:33.704 Removing: /var/run/dpdk/spdk_pid2197473 00:33:33.704 Removing: /var/run/dpdk/spdk_pid2200044 00:33:33.704 Removing: /var/run/dpdk/spdk_pid2200185 00:33:33.704 Removing: /var/run/dpdk/spdk_pid2202685 00:33:33.704 Removing: /var/run/dpdk/spdk_pid2203050 00:33:33.704 Removing: /var/run/dpdk/spdk_pid2203183 00:33:33.704 Clean 00:33:33.962 20:27:37 -- common/autotest_common.sh@1451 -- # return 0 00:33:33.962 20:27:37 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:33:33.962 20:27:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.962 20:27:37 -- 
common/autotest_common.sh@10 -- # set +x 00:33:33.962 20:27:37 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:33:33.962 20:27:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.962 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:33:33.962 20:27:37 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:33.962 20:27:37 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:33.962 20:27:37 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:33.962 20:27:37 -- spdk/autotest.sh@395 -- # hash lcov 00:33:33.962 20:27:37 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:33.962 20:27:37 -- spdk/autotest.sh@397 -- # hostname 00:33:33.962 20:27:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:34.528 geninfo: WARNING: invalid characters removed from testname! 00:34:55.955 20:28:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:55.955 20:28:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:04.076 20:29:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:12.202 20:29:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:20.327 20:29:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:28.449 20:29:32 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:36.599 20:29:40 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:36.599 20:29:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.599 20:29:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:36.599 20:29:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.599 20:29:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.599 20:29:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.599 20:29:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.599 20:29:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.599 20:29:40 -- paths/export.sh@5 -- $ export PATH 00:35:36.599 20:29:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.599 20:29:40 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:35:36.599 20:29:40 -- common/autobuild_common.sh@447 -- $ date +%s 00:35:36.599 20:29:40 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721845780.XXXXXX 00:35:36.599 20:29:40 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721845780.XyIvcb 00:35:36.599 20:29:40 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:35:36.599 20:29:40 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:35:36.599 20:29:40 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:35:36.599 20:29:40 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:35:36.599 20:29:40 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:35:36.599 20:29:40 -- common/autobuild_common.sh@463 -- $ get_config_params 00:35:36.599 20:29:40 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:35:36.599 20:29:40 -- common/autotest_common.sh@10 -- $ set +x 00:35:36.599 20:29:40 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:35:36.599 20:29:40 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:35:36.599 20:29:40 -- pm/common@17 -- $ local monitor 00:35:36.599 20:29:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:36.599 20:29:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:36.599 20:29:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:36.599 20:29:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:36.599 20:29:40 -- pm/common@21 -- $ date +%s 00:35:36.599 20:29:40 -- pm/common@25 -- $ sleep 1 00:35:36.599 20:29:40 -- pm/common@21 -- $ date +%s 00:35:36.599 20:29:40 -- pm/common@21 -- $ date +%s 00:35:36.599 20:29:40 -- pm/common@21 -- $ date +%s 00:35:36.599 20:29:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721845780 00:35:36.599 20:29:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721845780 00:35:36.599 20:29:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721845780 00:35:36.599 20:29:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721845780 00:35:36.599 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721845780_collect-vmstat.pm.log 00:35:36.599 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721845780_collect-cpu-load.pm.log 00:35:36.599 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721845780_collect-cpu-temp.pm.log 00:35:36.599 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721845780_collect-bmc-pm.bmc.pm.log 00:35:37.977 20:29:41 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:35:37.977 20:29:41 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:35:37.977 20:29:41 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:37.977 20:29:41 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:37.977 20:29:41 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:37.977 20:29:41 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:37.977 20:29:41 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:37.977 20:29:41 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:37.977 20:29:41 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:37.977 20:29:41 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:37.977 20:29:41 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:37.977 20:29:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:37.977 20:29:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:37.977 20:29:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:37.977 20:29:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:35:37.977 20:29:41 -- pm/common@44 -- $ pid=2213969 00:35:37.977 20:29:41 -- pm/common@50 -- $ kill -TERM 2213969 00:35:37.977 20:29:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:37.977 20:29:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:35:37.977 20:29:41 -- pm/common@44 -- $ pid=2213970 00:35:37.977 20:29:41 -- pm/common@50 -- $ kill -TERM 2213970 00:35:37.977 20:29:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:37.977 20:29:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:35:37.977 20:29:41 -- pm/common@44 -- $ pid=2213972 00:35:37.977 20:29:41 -- pm/common@50 -- $ kill -TERM 2213972 00:35:37.977 20:29:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:37.977 20:29:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:35:37.977 20:29:41 -- pm/common@44 -- $ pid=2214000 00:35:37.977 20:29:41 -- pm/common@50 -- $ sudo -E kill -TERM 2214000 00:35:37.977 + [[ -n 1825424 ]] 00:35:37.977 + sudo kill 1825424 00:35:37.990 [Pipeline] } 00:35:38.011 [Pipeline] // stage 00:35:38.019 [Pipeline] } 00:35:38.040 [Pipeline] // timeout 00:35:38.047 [Pipeline] } 00:35:38.067 [Pipeline] // catchError 00:35:38.075 [Pipeline] } 00:35:38.095 [Pipeline] // wrap 00:35:38.103 [Pipeline] } 00:35:38.120 [Pipeline] // catchError 00:35:38.132 [Pipeline] stage 00:35:38.135 [Pipeline] { (Epilogue) 00:35:38.152 [Pipeline] catchError 00:35:38.154 [Pipeline] { 00:35:38.171 [Pipeline] echo 00:35:38.173 Cleanup processes 00:35:38.180 [Pipeline] sh 00:35:38.467 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:38.467 2214099 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:35:38.467 2214234 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:38.482 [Pipeline] sh 00:35:38.766 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:38.766 ++ grep -v 'sudo pgrep' 00:35:38.766 ++ awk '{print $1}' 00:35:38.766 + sudo kill -9 2214099 00:35:38.780 [Pipeline] sh 00:35:39.065 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:05.631 [Pipeline] sh 00:36:05.922 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:05.922 Artifacts sizes are good 00:36:05.941 [Pipeline] archiveArtifacts 00:36:05.949 Archiving artifacts 00:36:06.231 [Pipeline] sh 00:36:06.517 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 
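The lcov coverage pass traced earlier (autotest.sh@397 through @403) is easier to follow condensed; a sketch under the assumption that LCOV_OPTS stands in for the repeated --rc flags and that cov_base.info is the baseline captured earlier in the run, with the five per-pattern -r invocations collapsed into one:

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
lcov $LCOV_OPTS -c -d "$rootdir" -t "$(hostname)" -o cov_test.info    # capture post-test counters
lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info   # merge with the baseline
lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
     '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info        # drop external and tool code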
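The stop_monitor_resources trace above follows the usual pidfile teardown pattern; a minimal sketch, assuming the monitor names and pidfile layout shown in the trace (the loop body reconstructs the scripts/perf/pm behavior and is not a verbatim copy):

power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
  pidfile=$power_dir/$mon.pid
  [[ -e $pidfile ]] && kill -TERM "$(< "$pidfile")"   # collect-bmc-pm is signalled via sudo -E in the trace
done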
00:36:06.532 [Pipeline] cleanWs 00:36:06.541 [WS-CLEANUP] Deleting project workspace... 00:36:06.541 [WS-CLEANUP] Deferred wipeout is used... 00:36:06.548 [WS-CLEANUP] done 00:36:06.550 [Pipeline] } 00:36:06.571 [Pipeline] // catchError 00:36:06.586 [Pipeline] sh 00:36:06.866 + logger -p user.info -t JENKINS-CI 00:36:06.873 [Pipeline] } 00:36:06.886 [Pipeline] // stage 00:36:06.890 [Pipeline] } 00:36:06.904 [Pipeline] // node 00:36:06.908 [Pipeline] End of Pipeline 00:36:06.942 Finished: SUCCESS